Image processing method, image processing apparatus, electronic device, and computer-readable storage medium

Document No.: 9411 | Publication date: 2021-09-17

Note: This technology, "Image processing method and apparatus, electronic device, and computer-readable storage medium", was designed and created by Xie Weihang on 2020-03-17. Abstract: The present disclosure provides an image processing method, an apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a first image to be tiled and a first texture image, wherein the first image to be tiled and the first texture image are both N-sided polygons, and N is a positive integer greater than 2; performing segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, wherein n is a positive integer greater than 2; performing segmentation processing on the first texture image to obtain a second n-polygon of the first texture image, wherein the first n-polygon and the second n-polygon have a first target mapping relationship; and mapping each second n-polygon to the corresponding first n-polygon according to the first target mapping relationship, so as to complete texture mapping processing of the first image to be tiled. The technical solution provided by the present disclosure can simplify the image rendering process in augmented reality scenarios and save server resources.

1. An image processing method, comprising:

acquiring a first image to be tiled and a first texture image, wherein the first image to be tiled and the first texture image are both N-sided polygons, and N is a positive integer greater than 2;

performing segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, wherein n is a positive integer greater than 2;

performing segmentation processing on the first texture image to obtain a second n-polygon of the first texture image, wherein the first n-polygon and the second n-polygon have a first target mapping relation;

and mapping each second n-polygon to a corresponding first n-polygon according to the first target mapping relation so as to perform texture mapping processing on the first image to be tiled.

2. The method of claim 1, wherein the first n-polygon is a first triangle, and the first image to be tiled includes a target vertex; wherein performing segmentation processing on the first image to be tiled to obtain the first n-polygon of the first image to be tiled comprises:

performing m-equal division on each edge of the first image to be tiled respectively to obtain m equal-division points on each edge, wherein m is a positive integer greater than 1;

and connecting the m equal-division points on each edge of the first image to be tiled with the target vertex to obtain the first n-polygon.

3. The method according to claim 1, wherein the first n-polygon is a first triangle, the first image to be tiled is a quadrilateral, and the quadrilateral includes a pair of target opposite sides; wherein performing segmentation processing on the first image to be tiled to obtain the first n-polygon of the first image to be tiled comprises:

performing m-equal division on each of the target opposite sides of the first image to be tiled respectively to obtain m equal-division points on the target opposite sides, wherein m is a positive integer greater than 1;

correspondingly connecting the m equal-division points on the target opposite sides of the first image to be tiled to obtain a plurality of target quadrilaterals;

and connecting diagonals of the plurality of target quadrilaterals to obtain the first triangle.

4. The method of claim 1, wherein obtaining the first image to be tiled comprises:

acquiring an image to be processed including a target object;

determining a standard graph matched with the target object, wherein the standard graph comprises vertex feature points;

determining, in the image to be processed, target feature points matched with the vertex feature points of the standard graph;

connecting the target feature points in sequence, and determining an image of the target object in the image to be processed;

and determining the first image to be tiled according to the image of the target object.

5. The method of claim 4, wherein determining a standard graph matching the target object, the standard graph including vertex feature points, comprises:

performing feature extraction processing on the image to be processed to obtain feature points of the image to be processed;

and determining a standard graph matched with the image to be processed according to the feature points of the image to be processed.

6. The method of claim 1, wherein acquiring the first image to be tiled further comprises:

acquiring an image to be processed including a target object;

determining an image of the target object in the image to be processed by using an image recognition technique;

and determining the first image to be tiled according to the image of the target object.

7. The method according to claim 4 or 6, wherein determining the first image to be tiled from the image of the target object comprises:

adding a background to the image of the target object to generate the first image to be tiled, wherein the target object is at a preset position of the first image to be tiled.

8. The method of claim 1, further comprising:

and rendering, through a target rendering framework, the mapped first n-polygon for display on a target screen.

9. The method of claim 1, further comprising:

acquiring a second texture image, wherein the second texture image and the first texture image are both from a target texture video;

performing segmentation processing on the second texture image to obtain a third n-polygon of the second texture image, wherein the first n-polygon and the third n-polygon have a second target mapping relationship;

and mapping each third n-polygon to the corresponding first n-polygon according to the second target mapping relation so as to perform video texture mapping processing on the first image to be tiled.

10. The method of claim 1, further comprising:

acquiring a second image to be tiled, wherein the first image to be tiled and the second image to be tiled are both acquired by a target device, and the second image to be tiled is a next frame image of the first image to be tiled;

acquiring a second texture image, wherein the second texture image and the first texture image are both from a target texture video, and the second texture image is a next frame image of the first texture image;

mapping the second texture image to the second image to be tiled according to the image processing method of claim 1;

and sequentially rendering, through a target rendering framework, the mapped first image to be tiled and the mapped second image to be tiled to a target screen for display, so as to perform video texture mapping processing on the dynamic images acquired by the target device.

11. The method of claim 10, wherein acquiring the second image to be tiled comprises:

acquiring an image of a target object through the target device to obtain an image to be processed at a current moment;

acquiring an image to be processed at a previous moment acquired by the target device, wherein the image to be processed at the previous moment comprises vertex feature point information of the target object;

processing the vertex feature point information of the target object in the image to be processed at the previous moment to obtain vertex feature point information of the target object in the image to be processed at the current moment;

determining an image of the target object at the current moment in the image to be processed at the current moment according to the vertex feature point information of the target object at the current moment;

and determining the second image to be tiled according to the image of the target object at the current moment.

12. An image processing apparatus, comprising:

an image acquisition module configured to acquire a first image to be tiled and a first texture image, wherein the first image to be tiled and the first texture image are both N-sided polygons, and N is a positive integer greater than 2;

a first segmentation module configured to perform segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, wherein n is a positive integer greater than 2;

a second segmentation module configured to perform segmentation processing on the first texture image to obtain a second n-polygon of the first texture image, wherein the first n-polygon and the second n-polygon have a first target mapping relationship;

and a mapping module configured to map each second n-polygon to the corresponding first n-polygon according to the first target mapping relationship, so as to perform texture mapping processing on the first image to be tiled.

13. An electronic device, comprising:

one or more processors;

a storage device configured to store one or more programs,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.

14. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-11.

Technical Field

The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.

Background

Texture mapping is the process of mapping texels in texture space to pixels in screen space; that is, a process that uses an image, function, or other data source to change the appearance of a surface. Currently, images are mainly texture-mapped through OpenGL (Open Graphics Library, a cross-language, cross-platform programming interface specification for graphics), so that a texture image can be mapped to screen spaces of different resolutions or different sizes.
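To make the texel-to-pixel idea concrete, the following is a minimal numpy sketch written for this text (not code from the disclosure; sample_texture and its nearest-neighbor strategy are illustrative assumptions) that maps one texture onto an output region of arbitrary resolution through normalized (u, v) coordinates:

```python
import numpy as np

def sample_texture(texture: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resample of a texture into an out_h x out_w region."""
    tex_h, tex_w = texture.shape[:2]
    # Normalized texture coordinates in [0, 1) for every output pixel center.
    v = (np.arange(out_h) + 0.5) / out_h
    u = (np.arange(out_w) + 0.5) / out_w
    rows = np.minimum((v * tex_h).astype(int), tex_h - 1)
    cols = np.minimum((u * tex_w).astype(int), tex_w - 1)
    return texture[rows[:, None], cols[None, :]]

# Example: stretch a 4x4 texture onto a 200x300 screen region.
tex = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
patch = sample_texture(tex, 200, 300)
```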

With the popularization of texture mapping technology, more and more software applies texture mapping in augmented reality scenarios, such as the construction of character frames in certain games. In general, texture mapping heavily occupies the CPU computing resources of a mobile device while the software is running, so that frames may drag or drop when the texture-mapped image is rendered on the screen.

Therefore, it is highly desirable to find a texture mapping method that occupies fewer CPU computing resources in augmented reality scenarios.

It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.

Disclosure of Invention

The embodiment of the disclosure provides an image processing method and device, electronic equipment and a computer-readable storage medium, which can map a first texture image to a first image to be tiled quickly and conveniently in an augmented reality scene.

Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.

The embodiment of the disclosure provides an image processing method, which includes: acquiring a first image to be tiled and a first texture image, wherein the first image to be tiled and the first texture image are both N-sided polygons, and N is a positive integer greater than 2; performing segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, wherein n is a positive integer greater than 2; performing segmentation processing on the first texture image to obtain a second n-polygon of the first texture image, wherein the first n-polygon and the second n-polygon have a first target mapping relation; and mapping each second n-polygon to a corresponding first n-polygon according to the first target mapping relation so as to perform texture mapping processing on the first image to be tiled.

In some embodiments, the first n-polygon is a first triangle, and the first image to be tiled includes a target vertex; performing segmentation processing on the first image to be tiled to obtain the first n-polygon of the first image to be tiled includes: performing m-equal division on each edge of the first image to be tiled respectively to obtain m equal-division points on each edge, where m is a positive integer greater than 1; and connecting the m equal-division points on each edge of the first image to be tiled with the target vertex to obtain the first n-polygon.

In some embodiments, the first n-polygon is a first triangle, the first image to be tiled is a quadrilateral, and the quadrilateral includes a pair of target opposite sides; performing segmentation processing on the first image to be tiled to obtain the first n-polygon of the first image to be tiled includes: performing m-equal division on each of the target opposite sides of the first image to be tiled respectively to obtain m equal-division points on the target opposite sides, where m is a positive integer greater than 1; correspondingly connecting the m equal-division points on the target opposite sides of the first image to be tiled to obtain a plurality of target quadrilaterals; and connecting diagonals of the plurality of target quadrilaterals to obtain the first triangle.

In some embodiments, acquiring a first image to be tiled includes: acquiring an image to be processed including a target object; determining a standard graph matched with the target object, where the standard graph includes vertex feature points; determining, in the image to be processed, target feature points matched with the vertex feature points of the standard graph; connecting the target feature points in sequence, and determining an image of the target object in the image to be processed; and determining the first image to be tiled according to the image of the target object.

In some embodiments, determining a standard graph matching the target object, the standard graph including vertex feature points, includes: performing feature extraction processing on the image to be processed to obtain feature points of the image to be processed; and determining a standard graph matched with the image to be processed according to the feature points of the image to be processed.

In some embodiments, acquiring the first image to be tiled further includes: acquiring an image to be processed including a target object; determining an image of the target object in the image to be processed by using an image recognition technique; and determining the first image to be tiled according to the image of the target object.

In some embodiments, determining the first image to be tiled from the image of the target object comprises: adding a background to the image of the target object to generate the first image to be tiled, wherein the target object is at a preset position of the first image to be tiled.

In some embodiments, the image processing method further includes: rendering, through a target rendering framework, the mapped first n-polygon for display on a target screen.

In some embodiments, the image processing method further comprises: acquiring a second texture image, wherein the second texture image and the first texture image are both from a target texture video; performing segmentation processing on the second texture image to obtain a third n-polygon of the second texture image, wherein the first n-polygon and the third n-polygon have a second target mapping relationship; and mapping each third n-polygon to the corresponding first n-polygon according to the second target mapping relation so as to perform video texture mapping processing on the first image to be tiled.

In some embodiments, the image processing method further includes: acquiring a second image to be tiled, where the first image to be tiled and the second image to be tiled are both acquired by a target device, and the second image to be tiled is a next frame image of the first image to be tiled; acquiring a second texture image, where the second texture image and the first texture image are both from a target texture video, and the second texture image is a next frame image of the first texture image; mapping the second texture image to the second image to be tiled according to the foregoing image processing method; and sequentially rendering, through a target rendering framework, the mapped first image to be tiled and the mapped second image to be tiled to a target screen for display, so as to perform video texture mapping processing on the dynamic images acquired by the target device.

In some embodiments, acquiring the second image to be tiled includes: acquiring an image of a target object through the target device to obtain an image to be processed at a current moment; acquiring an image to be processed at a previous moment acquired by the target device, where the image to be processed at the previous moment includes vertex feature point information of the target object; processing the vertex feature point information of the target object in the image to be processed at the previous moment to obtain vertex feature point information of the target object in the image to be processed at the current moment; determining an image of the target object at the current moment in the image to be processed at the current moment according to the vertex feature point information of the target object at the current moment; and determining the second image to be tiled according to the image of the target object at the current moment.

An embodiment of the present disclosure provides an image processing apparatus, including an image acquisition module, a first segmentation module, a second segmentation module, and a mapping module.

The image acquisition module may be configured to acquire a first image to be tiled and a first texture image, where the first image to be tiled and the first texture image are both N-sided polygons, and N is a positive integer greater than 2; the first segmentation module may be configured to perform segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, where n is a positive integer greater than 2; the second segmentation module may be configured to perform segmentation processing on the first texture image to obtain a second n-polygon of the first texture image, the first n-polygon and the second n-polygon having a first target mapping relationship; and the mapping module may be configured to map each second n-polygon to the corresponding first n-polygon according to the first target mapping relationship, so as to perform texture mapping processing on the first image to be tiled.

An embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of the above.

The disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements an image processing method as described in any of the above.

According to the image processing method and apparatus, the electronic device, and the computer-readable storage medium provided by the embodiments of the present disclosure, a first n-polygon and a second n-polygon having a mapping relationship are obtained by segmenting the first image to be tiled and the first texture image; the second n-polygon is then mapped directly onto the first n-polygon according to the mapping relationship, so as to complete the mapping of the first texture image onto the first image to be tiled. On the one hand, this method needs neither to acquire a pose matrix between the first image to be tiled and the first texture image nor to map the first texture image into the coordinate system of the first image to be tiled, which greatly saves CPU (Central Processing Unit) computing resources and avoids frame dragging or frame dropping; on the other hand, the number of first n-polygons can be adjusted according to device performance (including the device CPU and GPU): the number can be larger when device performance is better, so as to reduce the deformation produced during mapping, and smaller when device performance is poorer, so as to avoid frame dropping, frame dragging, and similar problems caused by device lag. Because the number of first n-polygons (or second n-polygons) is adjustable, the method can be adapted to various devices without algorithm optimization, so that the display effect of the mapped image is consistent across devices.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.

Fig. 1 shows a schematic diagram of an exemplary system architecture of an image processing method or an image processing apparatus applied to an embodiment of the present disclosure.

Fig. 2 is a schematic diagram illustrating a configuration of a computer system applied to an image processing apparatus according to an exemplary embodiment.

Fig. 3 is a schematic diagram illustrating the solving process of the P4P algorithm according to the related art.

Fig. 4 is a diagram showing an image processing system according to the related art.

FIG. 5 is a flow diagram illustrating an image processing method according to an exemplary embodiment.

FIG. 6 is a diagram illustrating image segmentation, according to an exemplary embodiment.

FIG. 7 is a diagram illustrating image segmentation, according to an exemplary embodiment.

FIG. 8 is a diagram illustrating image deformation according to an exemplary embodiment.

FIG. 9 is a diagram illustrating image segmentation, according to an exemplary embodiment.

Fig. 10 is a flowchart of step S01 in fig. 5 in an exemplary embodiment.

FIG. 11 is a diagram illustrating the determination of a target object in an image to be processed according to an exemplary embodiment.

FIG. 12 is a diagram illustrating the determination of a target object in an image to be processed according to an exemplary embodiment.

Fig. 13 illustrates a shape of an image of a target object separated from an image to be processed according to an exemplary embodiment.

Fig. 14 is a flowchart of step S12 in fig. 10 in an exemplary embodiment.

Fig. 15 is a flowchart of step S14 in fig. 10 in an exemplary embodiment.

FIG. 15A is a schematic diagram of a rendered image, according to an example embodiment.

Fig. 16 is a flowchart of step S01 in fig. 5 in an exemplary embodiment.

FIG. 17 is a flowchart of step S02 of FIG. 5 in an exemplary embodiment.

FIG. 18 is a diagram illustrating image segmentation, according to an exemplary embodiment.

FIG. 19 is a flowchart of step S02 of FIG. 5 in an exemplary embodiment.

FIG. 20 illustrates a method of image segmentation according to an exemplary embodiment.

FIG. 21 is a method of image processing in accordance with an example embodiment.

FIG. 22 illustrates an image processing method according to an exemplary embodiment.

FIG. 23 is a flowchart illustrating step S08 of FIG. 22 in an exemplary embodiment.

FIG. 24 is an illustration of an image processing system in accordance with an exemplary embodiment.

FIG. 25 illustrates an image processing system according to an exemplary embodiment.

Fig. 26 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.

The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.

The drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.

In this specification, the terms "a", "an", "the", "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and are not limiting on the number of their objects.

The following detailed description of exemplary embodiments of the disclosure refers to the accompanying drawings.

Fig. 1 shows a schematic diagram of an exemplary system architecture of an image processing method or an image processing apparatus to which the embodiments of the present disclosure can be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, wearable devices, virtual reality devices, smart homes, and the like.

The server 105 may be a server that provides various services, such as a background management server that provides support for devices operated by users of the terminal devices 101, 102, 103. The background management server can analyze and process received data such as requests, and feed back processing results to the terminal devices.

The server 105 may, for example, obtain a first image to be tiled and a first texture image, both of which are N-polygons, N being a positive integer greater than 2; the server 105 may, for example, perform a segmentation process on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, n being a positive integer greater than 2; server 105 may, for example, perform a segmentation process on the first texture image to obtain a second n-polygon of the first texture image, the first n-polygon and the second n-polygon having a first target mapping relationship; the server 105 may map each second n-polygon to a corresponding first n-polygon according to the first target mapping relationship, for example, to perform texture mapping processing on the first image to be tiled.

It should be understood that the number of terminal devices, networks and servers in fig. 1 is only illustrative, and the server 105 may be a physical server or may be composed of a plurality of servers, and there may be any number of terminal devices, networks and servers according to actual needs.

Referring now to FIG. 2, a block diagram of a computer system 200 suitable for implementing a terminal device of the embodiments of the present application is shown. The terminal device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.

As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for the operation of the system 200 are also stored. The CPU 201, the ROM 202, and the RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.

The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 210 as necessary, so that a computer program read out therefrom is installed into the storage section 208 as necessary.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 201.

It should be noted that the computer readable storage medium shown in the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The modules and/or units and/or sub-units described in the embodiments of the present application may be implemented by software, and may also be implemented by hardware. The described modules and/or units and/or sub-units may also be provided in a processor, and may be described as: a processor includes a transmitting unit, an obtaining unit, a determining unit, and a first processing unit. Wherein the names of such modules and/or units and/or sub-units do not in some way constitute a limitation on the modules and/or units and/or sub-units themselves.

As another aspect, the present application also provides a computer-readable storage medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: acquiring a first image to be tiled and a first texture image, wherein the first image to be tiled and the first texture image are both N-sided polygons, and N is a positive integer greater than 2; performing segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, wherein n is a positive integer greater than 2; performing segmentation processing on the first texture image to obtain a second n-polygon of the first texture image, wherein the first n-polygon and the second n-polygon have a first target mapping relation; and mapping each second n-polygon to a corresponding first n-polygon according to the first target mapping relation so as to perform texture mapping processing on the first image to be tiled.

With the popularization of computer technology, image mapping technology is applied ever more widely: for example, to image beautification (e.g., pasting glasses onto a face image), to image introduction (when a target image is scanned, related images can be displayed on or around it), and to the construction of 3D game characters.

In the related art, a PnP (Perspective-N-Point) technique is usually adopted to map a texture image in texture space onto an image to be tiled, and a rendering framework then renders the tiled image to the screen for display.

In general, the PnP technique can calculate the projection relationship between the coordinate system of the texture space and the coordinate system of the image to be tiled from N feature points in the texture-space coordinate system and the N corresponding feature points in the coordinate system of the image to be tiled, so as to obtain the camera pose or the object pose (i.e., obtain a Model-View-Projection (MVP) matrix).

At present, OpenCV (a cross-platform computer vision library commonly used in computer vision) supports at least three methods for solving the PnP problem: P3P (Perspective-3-Point), EPNP (Efficient Perspective-N-Point), and ITERATIVE (the default PnP solving method).

The following will explain the specific steps of the image mapping method in the related art by taking the P4P (Perspective-4-Point, 4-Point Perspective) method as an example.

The P4P method first solves for four possible matrix transformation relations (the three feature points of the texture image coordinate system and the three feature points of the coordinate system of the image to be tiled are in one-to-one correspondence), then re-projects the fourth feature point of the texture image into the coordinate system of the image to be tiled through the four matrix transformation relations, and determines the final matrix transformation relation (i.e., the MVP matrix, i.e., the pose matrix) by minimizing the reprojection error.

The solving process of the P4P technique will be explained below.

Fig. 3 is a diagram illustrating a solution to P4P according to the related art.

Let P be the camera corresponding to the image to be tiled, A, B, C, D be four feature points in the target texture image (let the target texture image be in the world coordinate system), and a, b, c, d be four feature points in the image to be tiled (let the image to be tiled be in the pixel coordinate system).

As shown in fig. 3, four sets of approximate camera poses are solved from A, B, C and a, b, c.

In this embodiment, the camera intrinsic matrix and the distortion parameters are known, so the angles between the rays from P through a, b, and c, and hence cos⟨a,b⟩, cos⟨a,c⟩, and cos⟨b,c⟩, can be obtained from the coordinates of a, b, and c; the goal is then to solve for the distances PA, PB, and PC in camera space. From the law of cosines:

PA² + PB² − 2·PA·PB·cos⟨a,b⟩ = AB²  (1)

PA² + PC² − 2·PA·PC·cos⟨a,c⟩ = AC²  (2)

PB² + PC² − 2·PB·PC·cos⟨b,c⟩ = BC²  (3)

Dividing equations (1) to (3) by PC² for elimination, and substituting x = PA/PC, y = PB/PC, u = AB²/PC², w = AC²/AB², and v = BC²/AB², equations (1) to (3) can be rewritten as equations (4) to (6):

x² + y² − 2xy·cos⟨a,b⟩ = u  (4)

x² + 1 − 2x·cos⟨a,c⟩ = wu  (5)

y² + 1 − 2y·cos⟨b,c⟩ = vu  (6)

where w, v, cos⟨a,b⟩, cos⟨a,c⟩, and cos⟨b,c⟩ are known; solving equations (4) to (6) finally yields four MVP matrices.

Combining the feature point D with the four MVP matrices, four projected points of the feature point D can be obtained in the coordinate system of abcd; the reprojection errors between these four projections and the point d can then be computed, and the MVP matrix corresponding to the minimum error is determined to be the proper solution.

After the final MVP matrix is obtained by the P4P technique, all points in the coordinate system of ABCD can be mapped to the coordinate system of abcd, thereby completing the image mapping process.
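For reference, this related-art pipeline can be reproduced with OpenCV's PnP solvers named above. The sketch below is illustrative only: the intrinsic matrix and the four correspondences are placeholder values, not data from this disclosure.

```python
import numpy as np
import cv2

# Four feature points A, B, C, D of the standard/texture image, placed on the
# z = 0 plane of the world coordinate system (placeholder units).
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float64)
# The matching feature points a, b, c, d in the image to be tiled (pixels).
image_points = np.array([[320, 240], [480, 250], [470, 390], [310, 380]],
                        dtype=np.float64)
# Assumed pinhole intrinsics; lens distortion is ignored in this sketch.
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float64)
dist_coeffs = np.zeros(5)

# Solve for the pose (the model-view part of the MVP matrix).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix,
                              dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

# Every texture-space point must then be projected through this pose; that
# per-point projection is the CPU cost criticized below.
projected, _ = cv2.projectPoints(object_points, rvec, tvec,
                                 camera_matrix, dist_coeffs)
reproj_error = np.linalg.norm(
    projected.reshape(-1, 2) - image_points, axis=1).mean()
```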

Fig. 4 is a diagram showing an image processing system according to the related art. The image processing method shown in fig. 4 may employ the above-described mapping method. The method specifically comprises the following steps:

the target user 401 scans the target object through the target device 402 to obtain an image to be processed (it is understood that the image to be processed includes not only the target object but also background information); the target device 402 uploads the image to be processed to the server 403, so that the server 403 determines a target object image from the image to be processed; the server 403 returns the standard graph, the texture graph, the information of the 4 target feature points in the standard graph, and the information of the corresponding 4 target feature points in the image to be tiled to the target device 402 (the standard graph and the texture graph should be in the same coordinate system); the target device 402 may process the target feature point information of the standard graph and of the image to be tiled by using the P4P technique, so as to obtain an MVP matrix between the coordinate system of the standard graph and the coordinate system of the image to be tiled; and the target device 402 maps the texture image onto the image to be tiled according to this MVP matrix, and renders the tiled image onto the screen of the target device 402 through a target rendering framework (e.g., OpenGL).

In the above embodiment, the existing image processing system needs to use the P4P technique to map the texture image onto the image to be tiled, and the P4P algorithm requires a large amount of computation and occupies a large share of the device's CPU computing resources, reducing its rendering capability. In the related art, if synchronous computation and rendering is adopted, frames are dropped, and a user sees obvious stuttering when the frame rate falls below 30 frames per second. If asynchronous computation and rendering is adopted, the video texture and the camera texture cannot be accurately attached, and the user sees dragged frames.

The embodiments of the present disclosure provide an image mapping method that occupies fewer CPU (Central Processing Unit) computing resources, so as to improve image rendering capability.

FIG. 5 is a flow diagram illustrating an image processing method according to an exemplary embodiment. The method provided by the embodiments of the present disclosure may be executed by any electronic device with computing capability, for example, the server 105 and/or the terminal devices 102 and 103 in the embodiment of fig. 1 described above; in the following embodiments, the server 105 is taken as the execution subject by way of example, but the present disclosure is not limited thereto.

Referring to fig. 5, an image processing method provided by an embodiment of the present disclosure may include the following steps.

In step S01, a first image to be tiled and a first texture image are obtained, where the first image to be tiled and the first texture image are both N-polygons, and N is a positive integer greater than 2.

When visiting a museum or an art gallery, a user may wish to view other works, written introductions, and the like related to a painting while scanning the painting with a target device (e.g., a mobile phone). In this case, it may be necessary to perform image mapping or video mapping on the image scanned by the target device, so as to map the other related works or introductions onto the scanned painting.

In some embodiments, the first image to be tiled may be any image that needs mapping, and the first texture image may be the image that needs to be pasted onto the image to be tiled.

It should be noted that, in order to be able to better paste the first texture image onto the first image to be tiled, the first texture image and the first image to be tiled may include the same number of edges. For example, if the first to-be-tiled image is a quadrilateral, then the first texture image should also be a quadrilateral.

It is understood that, in order to ensure the mapping effect, the first texture image and the first image to be tiled should have the same number of edges and approximately consistent edge-length ratios (for example, the first image to be tiled is a rectangle whose left edge and upper edge are in a 1:2 ratio, and the first texture image is a rectangle whose left edge and upper edge are also in a 1:2 ratio).

In step S02, the first image to be tiled is subjected to segmentation processing to obtain a first n-polygon of the first image to be tiled, where n is a positive integer greater than 2.

In some embodiments, the first n-polygon may be a triangle, and may also be a quadrangle, a pentagon, and other shapes, which are not limited by the present disclosure.

However, since the basic rendering primitive of commonly used image rendering frameworks (e.g., OpenGL) is the triangle, the first n-polygon is exemplified as a triangle in the present disclosure.

In some embodiments, if the first image to be tiled is a quadrilateral as shown in fig. 6, a diagonal of the quadrilateral may be connected to divide the quadrilateral into two triangles for mapping and the like. If the first image to be tiled is another polygon (e.g., a pentagon) as shown in fig. 7, one vertex of the pentagon may be connected to the other vertices to divide the pentagon into three triangles for the mapping operation and the like. Obviously, there are many methods for segmenting the image to be tiled into triangles, and the present disclosure does not limit this.

The basic idea of the embodiments of the present disclosure is to segment the first image to be tiled and the first texture image using the same segmentation method (e.g., connecting the same diagonal of each quadrilateral), so as to segment both into a plurality of triangles, and then paste the triangles of the first texture image onto the corresponding triangles of the first image to be tiled.

When the triangles of the first texture image are mapped onto the triangles of the first image to be tiled, seams exist between the triangles of the first texture image. Because the edge proportions of the first texture image and the first image to be tiled may not be completely consistent, the edge proportions of a triangle segmented from the first texture image and of the corresponding triangle segmented from the first image to be tiled are not necessarily consistent either; therefore, when a triangle of the first texture image is pasted onto the corresponding triangle of the first image to be tiled, the texture triangle needs to be scaled and stretched to some extent. Once the texture triangle is stretched and scaled, the deformation shown in fig. 8 is produced at the seams (for clarity, the triangles in fig. 8 are rasterized). This can also be understood as follows: the distortion at the diagonal seam shown in fig. 8 arises because vertex triangles in the rendering framework are directional, and the distortion is not obvious when the work is first scanned head-on by the camera (because, when facing straight on, the two rendered triangles are stretched in the same proportion, i.e., along the same normal direction).

In order to make the deformation inconspicuous, this embodiment segments the image to be tiled into a plurality of first n-polygons (e.g., triangles), so as to distribute the deformation uniformly over each small first n-polygon.

For example, assuming that the first n-polygon is a first triangle and the first image to be tiled includes a target vertex, m-equal division may be performed on each edge of the first image to be tiled to obtain m equal-division points on each edge, where m is a positive integer greater than 1; the m equal-division points on each edge of the first image to be tiled are then connected with the target vertex to obtain the first n-polygons.

In addition, if the first n-polygon is a first triangle, the first image to be tiled is a quadrilateral, and the quadrilateral includes a pair of target opposite sides, then, as shown in the left diagram of fig. 9, m-equal division may also be performed on the target opposite sides of the first image to be tiled to obtain m equal-division points on the target opposite sides, where m is a positive integer greater than 1; the m equal-division points on the target opposite sides of the first image to be tiled are then correspondingly connected to obtain a plurality of target quadrilaterals; finally, diagonals of the target quadrilaterals are connected to obtain the first triangles.

Actual measurements show that the greater the number of first n-polygons, the more rendering vertices are required and the higher the demand on device performance. In current practical applications, the number of first n-polygons may be 12 to 30. A sketch of this strip-based segmentation is given below.
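The following minimal sketch was written for this text (the corner ordering and the choice of the left and right edges as the target opposite sides are assumptions, not requirements of the method). It m-equally divides a pair of opposite sides, joins corresponding division points into strip quadrilaterals, and cuts each strip along a diagonal:

```python
import numpy as np

def split_into_triangles(corners: np.ndarray, m: int) -> list:
    """corners: (4, 2) array [tl, tr, br, bl]; returns 2*m triangles of shape (3, 2)."""
    tl, tr, br, bl = corners.astype(np.float64)
    triangles = []
    for i in range(m):
        t0, t1 = i / m, (i + 1) / m
        # Division points on the left edge (tl -> bl) and right edge (tr -> br).
        l0, l1 = tl + t0 * (bl - tl), tl + t1 * (bl - tl)
        r0, r1 = tr + t0 * (br - tr), tr + t1 * (br - tr)
        # One strip quadrilateral l0-r0-r1-l1, cut along its l0-r1 diagonal.
        triangles.append(np.array([l0, r0, r1]))
        triangles.append(np.array([l0, r1, l1]))
    return triangles

# Example: a slightly skewed quadrilateral split into 12 triangles (m = 6),
# within the 12-30 range mentioned above.
quad = np.array([[0, 0], [100, 10], [95, 210], [5, 200]])
tris = split_into_triangles(quad, m=6)
```

Applying the same function, with the same m and the same corner order, to both the first image to be tiled and the first texture image yields triangle pairs that correspond one to one, which is the first target mapping relationship used below.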

In step S03, a segmentation process is performed on the first texture image to obtain a second n-polygon of the first texture image, where the first n-polygon and the second n-polygon have a first target mapping relationship.

In some embodiments, the same segmentation method may be used to segment the first texture image to obtain the corresponding second n-polygons of the first texture image; it should be understood that the number of sides of a second n-polygon is the same as that of a first n-polygon. If the first n-polygon is a quadrilateral, the second n-polygon should also be a quadrilateral, and the first n-polygon and the second n-polygon should have a first target mapping relationship, which may refer to the correspondence between the first n-polygon and the second n-polygon. As shown in fig. 9, fig. 9(a) shows the first triangles obtained by segmenting the image to be tiled, and fig. 9(b) shows the second triangles obtained by segmenting the first texture image; evidently, the first triangle S1 and the second triangle S2 have a corresponding mapping relationship, and the second triangle S2 can be mapped onto the first triangle S1 to realize the image mapping operation.

In step S04, each second n-polygon is mapped to a corresponding first n-polygon according to the first target mapping relationship, so as to perform texture mapping processing on the first image to be tiled.

In some embodiments, the vertices of the two triangles may be aligned during mapping, and the texture triangle may then be stretched or shrunk to fit. Of course, the present disclosure does not limit the specific mapping method; one possible realization is sketched below.
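As one hedged realization (not necessarily the disclosure's implementation), a triangle-to-triangle mapping can be carried out with an affine warp plus a triangle mask, for example with OpenCV; map_triangle is a helper coined here, and applying it to every corresponding pair completes step S04:

```python
import numpy as np
import cv2

def map_triangle(texture: np.ndarray, target: np.ndarray,
                 src_tri, dst_tri) -> None:
    """Warp the src_tri region of `texture` onto the dst_tri region of `target`.

    src_tri/dst_tri: three (x, y) vertices in pixel coordinates.
    """
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    # Affine transform aligning the three vertex pairs (the stretch/shrink step).
    warp = cv2.getAffineTransform(src_tri, dst_tri)
    h, w = target.shape[:2]
    warped = cv2.warpAffine(texture, warp, (w, h))
    # Restrict the copy to the destination triangle only.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)
    target[mask > 0] = warped[mask > 0]
```

The stretch and shrink performed by the affine warp is exactly the per-triangle deformation discussed above, which is why a finer subdivision spreads it more evenly.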

In some embodiments, after the mapping of the image to be tiled is completed, the mapped first n-polygons may be rendered by a target rendering framework for display on a target screen. The target rendering framework may be an OpenGL rendering engine, or another rendering engine such as Metal (a rendering application programming interface), which is not limited in this disclosure.

In the image processing method provided by this embodiment, a first n-polygon and a second n-polygon having a mapping relationship are obtained by segmenting the first image to be tiled and the first texture image; the second n-polygon is then mapped directly onto the first n-polygon according to the mapping relationship, so as to complete the mapping of the first texture image onto the first image to be tiled. On the one hand, this method needs neither to acquire a pose matrix between the first image to be tiled and the first texture image nor to map the first texture image into the coordinate system of the first image to be tiled, which greatly saves CPU computing resources and avoids frame dragging or frame dropping during rendering; on the other hand, the number of first n-polygons can be adjusted according to device performance: larger when the device performs better, to reduce the deformation produced during mapping, and smaller when the device performs worse, to avoid frame dropping, frame dragging, and similar problems caused by device lag. Because the number of first n-polygons (or second n-polygons) is adjustable, the method can be adapted to various devices without algorithm optimization, so that the display effect of the mapped image is consistent across devices.

Fig. 10 is a flowchart of step S01 in fig. 5 in an exemplary embodiment. Referring to fig. 10, the above-mentioned step S01 may include the following steps.

In step S011, an image to be processed including a target object is acquired.

In some embodiments, the image to be processed may include not only target object (any item or object that may be captured) information but also background information. For example, in an art gallery, a user scans a target image, and the image scanned by the target user and including background information may be a to-be-processed image. The image shown in fig. 11(1) or fig. 12(1) may be the image to be processed.

In step S012, a standard map that matches the target object is determined, the standard map including vertex feature points.

In some embodiments, the standard graph may refer to an original picture including only the target object, as shown in fig. 11(2) or fig. 12 (2). Fig. 11(2) may be a standard diagram of the target object in fig. 11(1), and fig. 12(2) may be a standard diagram of the target object in fig. 12 (1).

When a mapping system is built for the paintings in a museum, a corresponding standard graph can be extracted and pre-stored for each painting in advance, so that the image of a target object can be determined, according to the standard graph, in the image to be processed obtained by a target user's scan.

Of course, if the target object and the background in the image to be processed are clearly separated, the gray-scale image of the image to be processed can be used directly, in combination with edge recognition technology, to determine the image of the target object and the vertex feature point information of that image from the image to be processed.

In some embodiments, the standard graph may also be an N-polygon including a plurality of feature point information, where the plurality of feature point information includes N vertex feature point information (e.g., coordinate information of the N vertex feature points in a coordinate system of the standard graph).

In step S013, target feature points that match the vertex feature points of the standard graph are determined in the image to be processed.

In some embodiments, features may be extracted from the image to be processed by an image feature point extraction technique, and feature matching may then be performed against the standard graph library by an image matching technique, so as to determine, among the plurality of standard graphs in the library, the standard graph matched with the image to be processed.

In some embodiments, after the standard graph matched with the image to be processed is determined, target feature points are determined in the image to be processed according to the N vertex feature points of the standard graph; the target feature points are the vertex feature points of the target object in the image to be processed. As shown in fig. 11, fig. 11(1) represents an image to be processed that includes not only the target object but also background information, and fig. 11(2) represents the standard graph of that image; the target feature points ABCD that match the feature points ABCD (the vertices of the target object) can be determined in the image to be processed by a feature matching technique. As shown in fig. 12, fig. 12(1) represents an image to be processed obtained by shooting the target object from a non-frontal angle with the image capturing device, and fig. 12(2) represents the standard graph of the target object; the target feature points ABCD matched with the feature points ABCD may likewise be determined by feature matching. It should be understood that the standard graph is illustrated as a quadrilateral only by way of example, which is not limited by this disclosure.

In step S014, the target feature points are connected in sequence, and an image of the target object is determined in the image to be processed.

In some embodiments, in fig. 11(1), the target feature points ABCD are connected in the order of the vertex feature points ABCD in fig. 11(2) to separate the image of the target object from the image to be processed.

It should be understood that the image of the target object determined in the image to be processed and the standard graph should have the same number of edges.
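As a concrete illustration of steps S013 and S014, the following Python sketch (OpenCV; the helper name locate_target_vertices and the 0.7 ratio-test threshold are assumptions of the example) matches SIFT features between the standard graph and the image to be processed, estimates a homography with RANSAC, and projects the vertex feature points of the standard graph into the image to be processed to obtain the target feature points, which can then be connected in sequence:

```python
import cv2
import numpy as np

def locate_target_vertices(standard, scene, std_vertices):
    """Project the standard graph's vertex feature points into the scene.

    std_vertices: Nx2 float32 array of vertex coordinates in the
    coordinate system of the standard graph.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(standard, None)
    kp2, des2 = sift.detectAndCompute(scene, None)

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Map the vertices of the standard graph into the image to be processed.
    pts = std_vertices.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```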

FIG. 13 illustrates a shape of an image of a target object separated from an image to be processed, according to an example embodiment. As shown in fig. 13, the upper part of fig. 13 shows images to be processed obtained by capturing a target object from various angles, and the lower part of fig. 13 shows the shape of an image of the target object separated from each image to be processed. It is clear that different shapes can be obtained by acquiring the target object from different angles.

In step S015, the first image to be tiled is determined according to the image of the target object.

In some embodiments, the image of the target object may be the first image to be tiled, or may be a part of the first image to be tiled (for example, the first image to be tiled includes not only the image of the target object but also some added background), or the first image to be tiled may be a part of the image of the target object (for example, if the target object is a human face, the first image to be tiled may be the eye region of the face image). It should be understood that the present disclosure does not limit the content of the first image to be tiled, which can be chosen according to actual needs.

It should be noted that, if the first image to be tiled is the image of the target object determined according to the standard graph, the number of edges of the standard graph should also be N; if the first image to be tiled is not an image of the target object, the present disclosure does not limit the number of edges of the standard graph.

In this embodiment, the standard graph of a target object is determined by an image matching technique, and the image of the target object is determined from the image to be processed by the vertex feature points of the standard graph. Compared with a plain image segmentation method, this approach can accurately separate the image of the target object even from an image to be processed in which the target object is not clearly distinguished from the background.

Fig. 14 is a flowchart of step S012 in fig. 10 in an exemplary embodiment. Referring to fig. 14, the above-mentioned step S012 may include the following steps.

In step S0121, a feature extraction process is performed on the image to be processed to obtain feature points of the image to be processed.

In some embodiments, a feature extraction algorithm such as the Scale-Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF) may be used to extract features from the image to be processed. Such an algorithm extracts the main feature points of the image to be processed.

In step S0122, a standard graph matching the image to be processed is determined according to the feature points of the image to be processed.

In some embodiments, the standard graph matching the image to be processed may be determined in the standard graph library by the features of the image to be processed. It can be understood that each standard graph in the library includes a plurality of feature points, and the feature points of a standard graph may be matched against those of the image to be processed by an image matching algorithm such as SIFT matching, so as to determine the standard graph of the image to be processed.
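By way of illustration only, the following sketch scores every graph in the standard graph library by its number of ratio-test SIFT matches and returns the best one; the dictionary-shaped library and the helper name match_standard_graph are assumptions of the example:

```python
import cv2

def match_standard_graph(scene, library):
    """Pick the standard graph with the most good SIFT matches.

    library: dict mapping a graph name to its (grayscale) image.
    """
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, des_scene = sift.detectAndCompute(scene, None)

    best_name, best_score = None, 0
    for name, graph in library.items():
        _, des_graph = sift.detectAndCompute(graph, None)
        good = [m for m, n in matcher.knnMatch(des_graph, des_scene, k=2)
                if m.distance < 0.7 * n.distance]
        if len(good) > best_score:
            best_name, best_score = name, len(good)
    return best_name
```

In practice the library descriptors would be precomputed and stored, rather than recomputed per query as in this sketch.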

According to the embodiment, the standard graph of the target object can be accurately determined in the standard graph library through the image matching technology.

Fig. 15 is a flowchart of step S014 in fig. 10 in an exemplary embodiment. Referring to fig. 15, the above-described step S014 may include the following steps.

In step S0141, an image to be processed including a target object is acquired.

In step S0142, a standard graph matching the target object is determined, the standard graph including vertex feature points.

In step S0143, the feature points matching the plurality of vertex feature points of the standard graph are connected in sequence to determine the image of the target object in the image to be processed.

In some embodiments, as described above for step S015, the image of the target object may be the first image to be tiled, only a part of it, or the first image to be tiled may be only a part of the image of the target object; this disclosure does not limit the content of the first image to be tiled.

In step S0144, a background is added to the image of the target object to generate the first image to be tiled, the target object being at a preset position of the first image to be tiled.

In some embodiments, the first image to be tiled may be an image that includes both the target object and a background image. For example, if a texture also needs to be attached beside the target object image in a scene, a background needs to be added around the target object. It is to be understood that the present disclosure does not limit the color, size, kind, transparency, etc. of the background.

In some embodiments, the image shown in FIG. 15A may be rendered using the image processing methods described above. FIG. 15A is a rendered schematic diagram of an image, according to an example embodiment.

As shown in fig. 15A, if the mapping position of the texture image 1503 is not limited to the target object image 1501 itself, the background 1502 needs to be added around the target object 1501. Correspondingly, a transparent background is generally also added to the texture image 1503 so that the texture image 1503 can be attached at the correct position.

In some embodiments, the preset position in this embodiment may be set according to actual requirements. For example, the target object image may be set at a middle position, a left position, a right position, or the like of the first image to be tiled, which is not limited by the present disclosure.
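As a minimal sketch of this background-adding step (NumPy only; the RGBA layout, the function name add_background, and the (row, column) preset position are assumptions of the example), the target object image is pasted onto a larger transparent canvas at the preset position:

```python
import numpy as np

def add_background(obj_rgba, canvas_size, top_left):
    """Place the target object image on a transparent canvas.

    obj_rgba: HxWx4 image of the target object; canvas_size:
    (height, width) of the first image to be tiled; top_left:
    preset (row, column) position of the object on the canvas.
    """
    h, w = obj_rgba.shape[:2]
    y, x = top_left
    canvas = np.zeros((canvas_size[0], canvas_size[1], 4), dtype=obj_rgba.dtype)
    canvas[y:y + h, x:x + w] = obj_rgba  # alpha stays 0 elsewhere
    return canvas
```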

According to the technical scheme provided by this embodiment, the image of the target object can be processed into the first image to be tiled that is actually required.

Fig. 16 is a flowchart of step S01 in fig. 5 in an exemplary embodiment. Referring to fig. 16, the above-mentioned step S01 may include the following steps.

In step S015, an image to be processed including the target object is acquired.

In step S016, an image of a target object is determined in the image to be processed by an image recognition technique to determine the first image to be tiled.

According to the technical scheme provided by this embodiment, the image of the target object is determined from the image to be processed through an image recognition technique (for example, edge recognition performed on the gray-scale image using an image segmentation technique). Compared with determining the image of the target object in the image to be processed through the standard graph, this saves CPU operation resources and improves the image recognition speed.
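A minimal sketch of this image recognition route (OpenCV; the Canny thresholds, the 0.02 epsilon factor, and the helper name find_target_polygon are assumptions of the example) extracts edges from the gray-scale image and keeps the largest contour that simplifies to the expected polygon:

```python
import cv2

def find_target_polygon(image, num_edges=4):
    """Find the target object's outline when it stands out from the background."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Largest contour first; keep the one that simplifies to num_edges vertices.
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == num_edges:
            return approx.reshape(-1, 2)
    return None  # fall back to standard-graph matching
```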

FIG. 17 is a flowchart of step S02 of FIG. 5 in an exemplary embodiment.

In some embodiments, the first n-polygon may be a triangle, the first image to be tiled includes a target vertex, and the target vertex may be any vertex of the first image to be tiled, which is not limited by the present disclosure.

Referring to fig. 17, the above-mentioned step S02 may include the following steps.

In step S021, each side of the first image to be tiled is divided into m equal parts, so as to obtain m equal division points on each side, where m is a positive integer greater than 1.

In some embodiments, the image to be tiled may be a triangle, a quadrilateral, a pentagon, or the like.

As shown in fig. 18, if the image to be tiled is a triangle, each side of the triangle may be divided into m equal parts by the method shown in fig. 18(a) (m = 2, 3, ...); if the image to be tiled is a pentagon, each side of the pentagon may be divided into m equal parts by the method shown in fig. 18(b) (m = 2, 3, ...). It is to be understood that the present disclosure is not limited to these segmentation methods.

In step S022, the m equal-division points on each side of the first image to be tiled are connected to the target vertex to obtain the first n-polygons.

As shown in fig. 18, after each side of the first image to be tiled is divided into m equal parts, the equal-division points on each side may be connected to the target vertex to obtain the first n-polygons.
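A minimal sketch of this fan-style segmentation (NumPy; the function name fan_triangulate and the vertex-array format are assumptions of the example) joins the equal-division points on every edge not incident to the target vertex to that vertex:

```python
import numpy as np

def fan_triangulate(vertices, target_idx=0, m=4):
    """Split an N-polygon into triangles fanned from one target vertex.

    vertices: Nx2 array of polygon vertices in order; each edge not
    incident to the target vertex is divided into m equal parts, and
    every sub-segment is joined to the target vertex.
    """
    v = np.asarray(vertices, dtype=np.float32)
    n = len(v)
    apex = v[target_idx]
    triangles = []
    for i in range(n):
        j = (i + 1) % n
        if target_idx in (i, j):
            continue  # edges touching the apex would give degenerate triangles
        # m equal-division points split edge (i, j) into m sub-segments.
        pts = [v[i] + (v[j] - v[i]) * k / m for k in range(m + 1)]
        for k in range(m):
            triangles.append(np.float32([apex, pts[k], pts[k + 1]]))
    return triangles
```

Applying the same function, with the same target_idx and m, to the first image to be tiled and to the first texture image yields two triangle lists whose i-th entries correspond, which is one way of realizing the first target mapping relation.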


It should be understood that any method that can segment the first image to be tiled into the first n-polygon is within the scope of the present disclosure.

It should be noted that, in the present disclosure, the same segmentation method needs to be applied to the first image to be tiled and the first texture image, so that the target vertex in the first image to be tiled and the target vertex in the first texture image should also correspond to each other.

This embodiment provides an image segmentation method that can segment the first image to be tiled into any desired number of first n-polygons. Because the number of first n-polygons is controllable, the mapping can be tuned at any time for hardware devices of different specifications (i.e., a suitable number of first n-polygons is chosen for each device), so that the rendering effect of the picture is the same across devices.

FIG. 19 is a flowchart of step S02 of FIG. 5 in an exemplary embodiment.

In some embodiments, the first n-polygon is a first triangle and the first image to be tiled is a quadrilateral, the quadrilateral including a target opposite side.

Referring to fig. 19, the above-described step S02 may include the following steps.

In step S023, m equal divisions are performed on the target opposite sides of the first image to be tiled respectively to obtain m equal divisions on the target opposite sides, where m is a positive integer greater than 1.

As shown in fig. 20(a), a certain pair of sides (e.g., AB and CD) of the first image to be tiled ABCD may be m-divided to obtain an image as shown in fig. 20 (b).

In step S024, m equally divided points on the target opposite sides of the first image to be tiled are correspondingly connected to obtain a plurality of target quadrangles.

As shown in fig. 20(c), the m-equally-divided points on the opposite sides AB and CD may be correspondingly connected to obtain a corresponding target quadrangle. It should be understood that the target quadrangle generated during the segmentation of the first image to be tiled may be an approximate parallelogram, which is not limited by the present disclosure.

In step S025, the diagonals of the plurality of target quadrilaterals are connected to obtain the first triangle.

In some embodiments, one diagonal of each target quadrangle may be connected, or both diagonals may be connected, as set according to actual requirements, which is not limited by the present disclosure.
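The following sketch illustrates this quadrilateral segmentation (NumPy; the function name strip_triangulate and the A, B, C, D ordering are assumptions of the example): corresponding equal-division points on the target opposite sides are joined into m target quadrangles, and one diagonal of each quadrangle is then connected:

```python
import numpy as np

def strip_triangulate(quad, m=4):
    """Split a quadrilateral ABCD into 2*m first triangles.

    quad: 4x2 array ordered A, B, C, D, with AB and DC taken as the
    target opposite sides (point k on AB corresponds to point k on DC).
    """
    a, b, c, d = np.asarray(quad, dtype=np.float32)
    top = [a + (b - a) * k / m for k in range(m + 1)]     # points on AB
    bottom = [d + (c - d) * k / m for k in range(m + 1)]  # points on DC
    triangles = []
    for k in range(m):
        p0, p1 = top[k], top[k + 1]
        q0, q1 = bottom[k], bottom[k + 1]
        # The diagonal p0-q1 splits the k-th target quadrangle in two.
        triangles.append(np.float32([p0, p1, q1]))
        triangles.append(np.float32([p0, q1, q0]))
    return triangles
```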

This embodiment provides an image segmentation method that can segment the first image to be tiled (a quadrilateral) into any desired number of first n-polygons. Because the number of first n-polygons is controllable, the mapping can be tuned at any time for hardware devices of different specifications (i.e., a suitable number of first n-polygons is chosen for each device), so that the rendering effect of the picture is the same across devices.

Fig. 21 is a flowchart of an image processing method according to an exemplary embodiment. Referring to fig. 21, the image processing method may include the following steps.

In some embodiments, not only the texture image but also the texture video may be mapped to the first image to be tiled.

In step S05, a second texture image is obtained, where the second texture image and the first texture image are both from a target texture video.

In some embodiments, if the target texture video is to be mapped onto the first image to be tiled, the target texture video may be decomposed in time into individual frames, and these frames may be mapped onto the first image to be tiled one by one in time order.

That is, after the first texture image is mapped onto the first image to be tiled, the next frame image (which may be taken as the second texture image) may be acquired from the target texture video. It is to be understood that the next frame image may be obtained directly from the original target texture video, or the target texture video may first be sampled (for example, at a certain frame rate) to obtain it, which is not limited by the present disclosure.

In step S06, a segmentation process is performed on the second texture image to obtain a third n-polygon of the second texture image, where the first n-polygon and the third n-polygon have a second target mapping relationship.

In some embodiments, the second texture image may be segmented by the same segmentation method as the first image to be tiled to obtain the third n-polygons. The third n-polygons have the same number of sides as the first n-polygons, and the two have the second target mapping relation.

In step S07, each third n-polygon is mapped to a corresponding first n-polygon according to the second target mapping relationship, so as to perform video texture mapping processing on the first image to be tiled.

It is understood that, after the second texture image is pasted, the next frame image in the target texture video may be continuously acquired, so as to continuously complete the pasting operation of the target texture video.
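A minimal driver sketch for this frame-by-frame tiling (OpenCV; the names map_texture_video and map_frame are assumptions of the example, with map_frame standing in for the segmentation-and-mapping steps described above):

```python
import cv2

def map_texture_video(video_path, base_image, map_frame):
    """Map each frame of the target texture video in time order.

    map_frame(frame, base) is assumed to segment both images and map
    the frame onto the base image, returning the tiled result.
    """
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of the target texture video
        tiled = map_frame(frame, base_image)
        cv2.imshow("tiled", tiled)
        if cv2.waitKey(1) == 27:  # press Esc to stop early
            break
    cap.release()
    cv2.destroyAllWindows()
```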

The technical scheme provided by the embodiment can be used for mapping the target texture video to the image to be tiled, and the finally obtained visual effect is as if the target object in the image to be tiled moves.

Fig. 22 is a flowchart of an image processing method according to an exemplary embodiment.

In some embodiments, not only the target texture video may be mapped onto the first image to be tiled, but also the target texture video may be mapped onto the video to be tiled. For example, when the user scans the target object through the mobile device, the user can see not only the target object but also the target texture video attached to the target object image in real time through the mobile device.

Referring to fig. 22, the above-described image processing method may include the following steps.

In step S08, a second image to be tiled is obtained, where the first image to be tiled and the second image to be tiled are both obtained by scanning of the target device, and the second image to be tiled is a next frame image of the first image to be tiled.

In some embodiments, when the user scans the target object through the mobile device, since the mobile device scans the target object at the target frame rate, the picture seen by the user through the mobile device may be considered as the video to be tiled.

In some embodiments, the first image to be tiled and the second image to be tiled may both be obtained by the mobile device scanning the target object, wherein the first image to be tiled may be an image of a previous frame of the second image to be tiled.

In step S09, a second texture image is obtained, where the second texture image and the first texture image are both from the target texture video, and the second texture image is a next frame image of the first texture image.

In some embodiments, the target texture video may be sampled according to a target frame rate of the mobile device, so as to temporally paste a sampled frame image onto a frame image obtained by scanning of the mobile device. That is, assuming that the mobile device obtains 30 frames of images per second, and there are 30 frames of images per second after sampling the target texture video, the 30 frames of images of the target texture video can be sequentially mapped onto 30 images obtained by scanning the mobile device according to the time sequence.
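For illustration, a small sketch of this frame-rate alignment (NumPy; the helper name sample_indices and its parameters are assumptions of the example) computes, for each frame scanned by the device, the index of the nearest texture-video frame:

```python
import numpy as np

def sample_indices(texture_fps, device_fps, duration_s):
    """Texture-frame index to pair with each scanned frame, in time order."""
    n_device = int(duration_s * device_fps)
    times = np.arange(n_device) / device_fps  # capture time of each scan
    idx = np.round(times * texture_fps).astype(int)
    # Clamp in case rounding runs past the last texture frame.
    return np.minimum(idx, int(duration_s * texture_fps) - 1)
```

With texture_fps = device_fps = 30, this simply pairs the k-th texture frame with the k-th scanned frame, as in the example above.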

In step S10, the second texture image is mapped to the second image to be tiled according to any of the image processing methods described above.

In some embodiments, after the first texture image is mapped onto the first image to be tiled, the second texture image may be mapped onto the second image to be tiled according to the image processing method provided by the embodiments of the present disclosure.

In step S11, the mapped first image to be tiled and second image to be tiled are sequentially rendered into the target screen for display by the target rendering frame, so as to perform video texture mapping processing on the dynamic image acquired by the target device.

In some embodiments, after the mapping of the first image to be tiled or the second image to be tiled is completed, the mapped image may be rendered to the screen for display using the target rendering framework.

FIG. 23 is a flowchart illustrating step S08 of FIG. 22 in an exemplary embodiment. As shown in fig. 23, the above-described image processing method may include the following steps.

In the method shown in fig. 22, the coordinate values of the vertices of the target object differ between the images to be processed that the mobile device acquires at different moments, because the pose of the mobile device changes as it scans. In general, the target vertices can be determined from the image to be processed according to the standard graph, but that approach must match against the standard graph for every frame, which occupies a large amount of memory and takes a long time. A vertex identification method with low time complexity and a small memory footprint is therefore important to this scheme.

Since the image to be processed obtained at the current moment and the image to be processed obtained at the previous moment show essentially the same object, it is not necessary to redetermine the standard graph of the target object for every frame and to determine the vertices of the target object from it.

This embodiment provides the following method for quickly and accurately determining the vertices of the target object in the image to be processed.

In step S081, the target object is scanned by the target device to acquire the image to be processed at the current moment.

In step S082, the image to be processed at the previous moment is acquired, the image to be processed at the previous moment including a plurality of target vertices of the target object.

In some embodiments, the target vertex information (e.g., position coordinate information) in the image to be processed acquired at the previous time is generally known.

In step S083, a plurality of target vertices of the target object at the previous time are processed to obtain a plurality of target vertices of the target object in the image to be processed at the current time.

In some embodiments, the target vertices of the target object in the image to be processed obtained at the previous time may be processed by an optical flow tracking method with lower temporal complexity to determine a plurality of target vertices of the target object in the image to be processed at the current time.

The optical flow tracking algorithm is an algorithm with low time complexity, and has high application value for application scenes with high real-time requirements.

The optical flow tracking algorithm is briefly described below.

First, optical flow tracking rests on the general assumption that the pixel brightness of a feature point does not change over time; translated into computer vision terms, the gray value of the feature point is constant over time. This yields the equation:

I(x+Δx, y+Δy, t+Δt) = I(x, y, t) (7)

where I(x, y, t) represents the gray value at location (x, y) at time t, and Δx, Δy, Δt represent the offsets of x, y, and t.

Taylor-expanding the left-hand side, cancelling the common term I(x, y, t), and dividing by the time increment Δt yield equations (8)-(10) in turn:

I(x+Δx, y+Δy, t+Δt) ≈ I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt (8)

(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0 (9)

IxVx + IyVy + It = 0 (10)

Here I is the light intensity (gray value) at (x, y) at time t; Vx and Vy are the velocities of the point (x, y) along the x and y axes of the image; and Ix, Iy, It are the partial derivatives of I with respect to x, y, and t. Rearranging equation (10) gives:

IxVx + IyVy = -It (11)

This equation is called the optical flow equation; with two unknowns in a single equation, it has no unique solution by itself.

The optical flow tracking method therefore further assumes that the image content is displaced only slightly between two adjacent frames and that the displacement is approximately constant within a neighborhood of the point of interest p. It can then be assumed that the optical flow equation holds for all pixels within a window centered at p. Applying the optical flow equation to the pixels q1, ..., qk near the feature point gives enough equations, which can be written in matrix form as A v = b, where the i-th row of A is (Ix(qi), Iy(qi)), v = (Vx, Vy)^T, and b = -(It(q1), ..., It(qk))^T.

Since the number of unknowns (two) is smaller than the number of equations, the system is overdetermined; solving it by the least squares method, v = (A^T A)^(-1) A^T b, gives approximate values of Vx and Vy, i.e., the two-dimensional displacement vector of the feature point (x, y) at time t.

With this two-dimensional displacement, the target vertices of the target object at the current moment can be determined from the plurality of target vertices of the image to be processed at the previous moment.
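In practice, this pyramidal Lucas-Kanade tracking is available off the shelf; the following sketch (OpenCV; the window size, pyramid depth, and helper name track_vertices are illustrative defaults) tracks the target vertices of the previous frame into the current frame:

```python
import cv2
import numpy as np

def track_vertices(prev_gray, cur_gray, prev_vertices):
    """Track the target object's vertices into the current frame.

    prev_vertices: Nx2 float32 array of vertex coordinates in the
    image to be processed at the previous moment.
    """
    pts = prev_vertices.reshape(-1, 1, 2).astype(np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, pts, None, winSize=(21, 21), maxLevel=3)
    if status is None or not status.all():
        return None  # tracking lost: fall back to standard-graph matching
    return nxt.reshape(-1, 2)
```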

In step S084, the second image to be tiled is determined in the image to be processed at the current time according to the plurality of target vertices of the target object at the current time.

In some embodiments, the second image to be tiled may be determined according to the target vertices of the target object in the image to be processed at the current moment.

Fig. 24 is an illustration of an image processing system according to an exemplary embodiment. As shown in fig. 24, after obtaining the first frame of the image to be tiled issued by the picture recognition service, the system no longer needs to use the standard graph to determine the vertices of subsequent to-be-tiled images; it only needs to process the vertices of the previous frame with the optical flow tracking algorithm to obtain the vertex information of the current frame, and can then perform image segmentation according to the current frame's vertices (of, for example, a parallelogram) and attach the texture image to the image to be tiled. Compared with determining the vertex information of every frame according to the standard graph, this greatly improves the mapping speed and saves CPU operation resources.

Fig. 25 illustrates an image processing system according to an exemplary embodiment. Referring to fig. 25, in this system a target user 2501 scans a painting through a target device 2502 to obtain an image to be processed (it is understood that the image to be processed includes not only the target object but also background information); the target device 2502 uploads the image to be processed to the server 2503, so that the server determines the target object image from the image to be processed; the server 2503 returns the texture image, together with the vertex position information of the image to be tiled and of the texture image, to the target device 2502; the target device 2502 then maps the texture image directly onto the image to be tiled through the image processing method provided by the present disclosure, and renders the mapped image onto the screen of the target device 2502 through the target rendering framework.

Compared with the image processing system shown in fig. 4, the image processing system provided by this embodiment does not need to use a PnP algorithm to obtain the MVP matrix between the texture image and the image to be tiled; this not only simplifies the rendering flow in the augmented reality scene but also greatly saves CPU operation resources. Under this scheme, most of the CPU operation cost (namely the cost of determining the MVP matrix) is shifted to GPU rendering cost, which avoids the frame dropping or frame dragging caused by insufficient CPU resources during on-screen rendering.

Fig. 26 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 26, an image processing apparatus 2600 provided in an embodiment of the present disclosure may include: an image acquisition module 2601, a first segmentation module 2602, a second segmentation module 2603, and a mapping module 2604.

The image obtaining module 2601 may be configured to obtain a first image to be tiled and a first texture image, where the first image to be tiled and the first texture image are both N-polygons, and N is a positive integer greater than 2; the first segmentation module 2602 may be configured to perform segmentation processing on the first image to be tiled to obtain a first n-polygon of the first image to be tiled, n being a positive integer greater than 2; the second segmentation module 2603 may be configured to perform a segmentation process on the first texture image to obtain a second n-polygon of the first texture image, the first n-polygon and the second n-polygon having a first target mapping relationship; the mapping module 2604 may be configured to map each second n-polygon to a corresponding first n-polygon according to the first target mapping relationship, so as to perform texture mapping on the first image to be tiled.

In some embodiments, the first n-polygon is a first triangle, the first image to be tiled includes a target vertex, wherein the first segmentation module 2602 may include: a first dividing unit and a first connecting unit.

The first dividing unit may be configured to divide each side of the first image to be tiled into m equal parts to obtain m equal-division points on each side, where m is a positive integer greater than 1; the first connecting unit may be configured to connect the m equal-division points on each side of the first image to be tiled with the target vertex to obtain the first n-polygons.

In some embodiments, the first n-polygon is a first triangle and the first image to be tiled is a quadrilateral including a target pair of opposite sides. In these embodiments, the first segmentation module 2602 may further include: a second dividing unit, a second connecting unit, and a diagonal connecting unit.

The second dividing unit may be configured to divide the target opposite sides of the first image to be tiled by m, respectively, to obtain m division points on the target opposite sides, where m is a positive integer greater than 1; the second connecting unit may be configured to correspondingly connect m equally divided points on the target opposite sides of the first image to be tiled to obtain a plurality of target quadrangles; the diagonal connection unit may be configured to connect the diagonals of the plurality of target quadrilaterals to obtain the first triangle.

In some embodiments, the image acquisition module 2601 may comprise: the device comprises an image acquisition unit, a standard diagram determining unit, a target characteristic point determining unit, a first sequential connection unit and a first image to be pasted determining unit.

Wherein the image acquisition unit may be configured to acquire an image to be processed including a target object; the standard graph determining unit may be configured to determine a standard graph matching the target object, the standard graph including vertex feature points; the target feature point determining unit may be configured to determine a target feature point matching a vertex feature point of the standard graph in the image to be processed; the first in-sequence connecting unit may be configured to connect the target feature points in sequence, and determine an image of the target object in the image to be processed; the first image to be tiled determination unit may be configured to determine the first image to be tiled from the image of the target object.

In some embodiments, the standard graph determining unit may include: a feature extraction subunit and a feature matching subunit.

The feature extraction subunit may be configured to perform feature extraction processing on the image to be processed to obtain the feature points of the image to be processed; the feature matching subunit may be configured to determine the standard graph matched with the image to be processed according to those feature points.

In some embodiments, the first image to be tiled determination unit may include: background add subunits.

Wherein the background adding subunit may be configured to add a background to the target object to generate the first image to be tiled, the target object being at a preset position of the first image to be tiled.

In some embodiments, the image acquisition module 2601 may further comprise: the device comprises an image acquisition unit to be processed, an image recognition unit and a first image determining unit to be pasted.

Wherein the to-be-processed image acquiring unit may be configured to acquire an image to be processed including a target object; the image recognition unit may be configured to determine an image of the target object in the image to be processed by an image recognition technique; the first image to be tiled determination unit may be configured to determine the first image to be tiled from the image of the target object.

In some embodiments, the image processing apparatus 2600 may further include an image rendering module, which may be configured to render the mapped first n-polygons through the target rendering framework for display on the target screen.

In some embodiments, the image processing apparatus 2600 may further include: the texture mapping device comprises a second texture image acquisition module, a texture image segmentation module and a first mapping module.

Wherein the second texture image acquisition module may be configured to acquire a second texture image, the second texture image and the first texture image both being from a target texture video; the texture image segmentation module may be configured to perform segmentation processing on the second texture image to obtain a third n-polygon of the second texture image, the first n-polygon and the third n-polygon having a second target mapping relationship; the first mapping module may be configured to map each third n-polygon to a corresponding first n-polygon according to the second target mapping relationship, so as to perform video texture mapping processing on the first image to be tiled.

In some embodiments, the image processing apparatus 2600 may further include: the system comprises a second image to be tiled obtaining module, a second texture image determining module, a second mapping module and a second rendering module.

The second to-be-tiled image acquisition module may be configured to acquire a second image to be tiled, wherein the first image to be tiled and the second image to be tiled are both obtained by scanning of the target device, and the second image to be tiled is the next frame image of the first image to be tiled; the second texture image determination module may be configured to acquire a second texture image, the second texture image and the first texture image both coming from the target texture video, the second texture image being the next frame image of the first texture image; the second mapping module may be configured to map the second texture image onto the second image to be tiled according to the image processing method described above; and the second rendering module may be configured to render the tiled first and second images sequentially to the target screen through the target rendering framework for display, so as to perform video texture mapping on the dynamic images acquired by the target device.

In some embodiments, the second image to be tiled acquisition module may include: the system comprises an image acquisition unit, a last moment target vertex acquisition unit, a current moment target vertex acquisition unit, a target object determination unit and a second image acquisition unit to be pasted.

The image acquisition unit may be configured to scan the target object through the target device to acquire the image to be processed at the current moment; the previous-moment target vertex acquisition unit may be configured to acquire the image to be processed at the previous moment acquired by the target device, that image including the vertex feature point information of the target object; the current-moment target vertex acquisition unit may be configured to process the vertex feature point information of the target object at the previous moment to obtain the vertex feature point information of the target object in the image to be processed at the current moment; the target object determination unit may be configured to determine the image of the target object at the current moment in the image to be processed at the current moment according to that vertex feature point information; and the second to-be-tiled image acquisition unit may be configured to determine the second image to be tiled according to the image of the target object at the current moment.

Since each functional module of the image processing apparatus 2600 according to the exemplary embodiment of the present disclosure corresponds to the step of the exemplary embodiment of the image processing method described above, it is not described herein again.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes several instructions for enabling a computing device (which may be a personal computer, a server, a mobile terminal, or a smart device, etc.) to execute the method according to the embodiments of the present disclosure, such as one or more of the steps shown in fig. 5.

Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It is to be understood that the disclosure is not limited to the details of construction, the arrangements of the drawings, or the manner of implementation that have been set forth herein, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
