Image processing apparatus, method and medium

Document No.: 1273101  Publication date: 2020-08-25

Reading note: This technology, "Image processing apparatus, method and medium", was designed and created by 田虎 and 李斐 on 2019-02-19. Its main content is as follows: disclosed are an image processing apparatus, method, and medium, the image processing apparatus including: a first training unit that trains a depth network using a labeled input image to obtain a depth image of the labeled input image; a second training unit that trains a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating regions where the estimated depth of the depth image is close to the true depth; and a third training unit that trains the depth network using the labeled input image and an unlabeled input image together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as the true depth of the unlabeled input image.

1. An image processing apparatus comprising:

a first training unit that trains a depth network using a labeled input image to obtain a depth image of the labeled input image;

a second training unit that trains a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating regions where the estimated depth of the depth image is close to the true depth; and

a third training unit that trains the depth network using the labeled and unlabeled input images together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as a true depth of the unlabeled input image.

2. The apparatus of claim 1, wherein the first training unit trains the depth network by optimizing a distance between an estimated depth and a true depth of the depth image for pixels of the labeled input image.

3. The apparatus of claim 1, wherein the second training unit trains the confidence network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the labeled input image.

4. The apparatus of claim 3, wherein the confidence image of the confidence network is obtained from a relative error between an estimated depth and a true depth of the depth image.

5. The apparatus according to claim 1 or 4, wherein a confidence image of the unlabeled input image is predicted by the confidence network.

6. The apparatus of claim 5, wherein, for pixels of the unlabeled input image, estimated depths of regions whose confidence in the confidence image of the unlabeled input image is greater than a predetermined threshold are taken as pseudo-true depths of the unlabeled input image.

7. The apparatus of claim 1, wherein the third training unit optimizes a distance between an estimated depth and a true depth of the depth images of the labeled input image and the unlabeled input image simultaneously.

8. The apparatus of claim 1, wherein the labeled input image and the unlabeled input image are each a single color image.

9. An image processing method comprising:

training a depth network using a labeled input image to obtain a depth image of the labeled input image;

training a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating regions of the depth image whose estimated depth is close to the true depth; and

training the depth network using the labeled and unlabeled input images together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is considered to be a true depth of the unlabeled input image.

10. A machine-readable storage medium having a program product embodied thereon, the program product comprising machine-readable instruction code stored therein, wherein the instruction code, when read and executed by a computer, is capable of causing the computer to perform the method of claim 9.

Technical Field

The present disclosure relates to the field of computer vision, and in particular, to an image processing apparatus and method for performing depth estimation from a single image.

Background

This section provides background information related to the present disclosure, which is not necessarily prior art.

Depth estimation from a single image is a very important problem in the field of computer vision; its purpose is to assign a depth to each pixel in the image. If the depth information of an image can be accurately estimated, the spatial position information of objects in the scene can be obtained, which is helpful for scene understanding and three-dimensional reconstruction.

The estimation of depth is usually achieved by supervised learning, that is, an image and its corresponding true depth map are required to train the model. Convolutional neural networks are a very effective model for supervised learning, and over the years, convolutional-neural-network-based methods have greatly improved the accuracy of depth estimation. However, training these depth networks requires a large number of labeled samples. Even though some consumer-grade cameras, such as the Kinect, can be used to directly acquire the true depth of a scene, doing so still requires a great deal of manpower and time.

Disclosure of Invention

This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.

In order to solve the problem that acquiring real depth data is costly, a semi-supervised depth estimation scheme based on confidence learning is provided. The semi-supervised depth estimation scheme according to the present disclosure first trains a confidence model on data with true depth; this model predicts a confidence indicating whether an input depth is accurate, where a higher confidence at a position in the output confidence map indicates that the depth estimate at that position is closer to the true depth. Then, for data without true depth, the confidence of its estimated depth is predicted by the confidence model, and the estimated depths at positions of higher confidence are selected from the depth map as pseudo-true depths. Finally, in the next iteration, the data with pseudo-true depths and the data with true depths together train the depth estimation network. Compared with a fully supervised scheme, the semi-supervised scheme according to the present disclosure can achieve better performance given the same amount of data with true depth, thereby alleviating, to some extent, the need for a large amount of true depth data.

According to an aspect of the present disclosure, there is provided an image processing apparatus including: a first training unit that trains a depth network using a labeled input image to obtain a depth image of the labeled input image; a second training unit that trains a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating regions where the estimated depth of the depth image is close to the true depth; and a third training unit that trains the depth network using the labeled input image and an unlabeled input image together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as the true depth of the unlabeled input image.

According to another aspect of the present disclosure, there is provided an image processing method including: training a depth network using a labeled input image to obtain a depth image of the labeled input image; training a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating regions of the depth image whose estimated depth is close to the true depth; and training the depth network using the labeled input image and an unlabeled input image together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as the true depth of the unlabeled input image.

According to another aspect of the present disclosure, there is provided a program product comprising machine-readable instruction code stored therein, wherein the instruction code, when read and executed by a computer, is capable of causing the computer to perform an image processing method according to the present disclosure.

According to another aspect of the present disclosure, a machine-readable storage medium is provided, having embodied thereon a program product according to the present disclosure.

With the semi-supervised method based on confidence learning according to the present disclosure, the confidence of an estimated depth can be predicted and reliable regions of unlabeled data can be obtained. These unlabeled data are then used along with the labeled data to train the depth estimation model. Compared to a fully supervised approach, the semi-supervised approach according to the present disclosure can achieve better performance, thereby reducing the need for a large amount of labeled training data.

Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.

Drawings

The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure. In the drawings:

fig. 1 is a block diagram of an image processing apparatus according to one embodiment of the present disclosure;

FIG. 2 is a system framework for confidence model training according to one embodiment of the present disclosure;

FIG. 3 is a flow diagram of an image processing method according to one embodiment of the present disclosure; and

fig. 4 is a block diagram of an exemplary structure of a general-purpose personal computer in which the image processing apparatus and method according to the embodiment of the present disclosure can be implemented.

While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. It is noted that throughout the several views, corresponding reference numerals indicate corresponding parts.

Detailed Description

Examples of the present disclosure will now be described more fully with reference to the accompanying drawings. The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses.

Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In certain example embodiments, well-known processes, well-known structures, and well-known technologies are not described in detail.

In order to solve the problem that acquiring real depth data is costly, a semi-supervised depth estimation method based on confidence learning is provided. Here, "semi-supervised" means that a portion of the training images have true depth and a portion do not.

The general idea of the depth estimation method according to the present disclosure is as follows. First, a confidence model is trained on data with true depth; the confidence model predicts a confidence indicating whether the input depth is accurate, where a higher confidence at a position in the output confidence map indicates that the depth estimate at that position is closer to the true depth. Then, for data without true depth, the confidence of its estimated depth is predicted by the confidence model, and the estimated depths at positions of higher confidence are selected from the depth map as pseudo-true depths for the data without true depth. Finally, in the next iteration, the pseudo-true depth data and the true depth data are used together to train the depth estimation network.

Compared with a fully supervised method, given the same amount of data with true depth, the semi-supervised depth estimation method of the present disclosure can achieve better performance, thereby alleviating, to a certain extent, the need for a large amount of true depth data.

According to an embodiment of the present disclosure, there is provided an image processing apparatus. The image processing apparatus includes: a first training unit that trains a depth network using a labeled input image to obtain a depth image of the labeled input image; a second training unit that trains a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating regions where the estimated depth of the depth image is close to the true depth; and a third training unit that trains the depth network using the labeled input image and an unlabeled input image together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as the true depth of the unlabeled input image.

As illustrated in fig. 1, an image processing apparatus 100 according to the present disclosure may include a first training unit 101, a second training unit 102, and a third training unit 103.

The first training unit 101 may train the depth network using the labeled input images (images with real depth) to obtain estimated depth images of the labeled input images. Next, the second training unit 102 may train a confidence network using the labeled input image and the obtained estimated depth image to obtain a confidence image indicating a region where the estimated depth of the estimated depth image is close to the true depth. Then, the third training unit 103 may train the depth network using the labeled input image and the unlabeled input image together, wherein a pseudo-true depth of the unlabeled input image is obtained from the estimated depth image and the confidence image, and the pseudo-true depth is considered as a true depth of the unlabeled input image.

As shown in fig. 2, an image processing apparatus according to the present disclosure may be used to train two networks: a depth network and a confidence network. The input to the depth network may be, for example, a color image or a grayscale image, and the output is an estimated depth image. The input to the confidence network may be a color image and an estimated depth image, and the output is a confidence image. In a confidence image, the confidence at each position (e.g., a value between 0 and 1) represents how close the estimated depth at that position in the estimated depth image is to its true depth. The higher the confidence, the closer the estimated depth is to the true depth; conversely, the lower the confidence, the less accurate the estimated depth. In other words, the confidence image can be used as a kind of supervisory information: for an image without true depth, the positions on its estimated depth image (output by the depth network) where the depth is close to the true depth can be identified. The estimated depths at these positions can then be treated as pseudo-true depths of the image without true depth (i.e., these pseudo-true depths are treated as the true depths of the image without true depth), thereby adding training samples with true depth for depth estimation.

According to an embodiment of the present disclosure, the first training unit 101 may train the depth network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the labeled input image.

In other words, the first training unit 101 employs supervised learning, i.e. training the depth network for the labeled (with real depth) input images. The training goal of the depth network is to have its output depth equal to the true depth of the input image.

For example, according to one embodiment of the present disclosure, the supervised learning loss function $\mathcal{L}_s$ of the first training unit 101 can be expressed as the Euclidean distance between the estimated depth and the true depth:

$$\mathcal{L}_s = \sum_{n}\sum_{p} \left\| E_n(p) - D_n(p) \right\|^2 \qquad (1)$$

where n indexes the n-th input image, p denotes the position of a pixel, $D_n$ denotes the true depth of the n-th input image $I_n$, and $E_n$ denotes the estimated depth of the n-th input image $I_n$. Here, it should be apparent to those skilled in the art that this definition of the supervised learning loss function is merely exemplary, and the present disclosure is not limited thereto.
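As an illustrative sketch (not the patent's implementation), the supervised loss of equation (1) can be computed with NumPy as follows; the function name and array shapes are assumptions chosen for illustration:

```python
import numpy as np

def supervised_depth_loss(estimated, true):
    """Sum-of-squares (Euclidean) distance between the estimated depths
    E_n(p) and the true depths D_n(p) over all images n and pixels p,
    as in equation (1)."""
    return np.sum((estimated - true) ** 2)

# Toy example: 2 images of 4x4 pixels each.
E = np.ones((2, 4, 4))        # estimated depths E_n(p)
D = np.full((2, 4, 4), 1.5)   # true depths D_n(p)
loss = supervised_depth_loss(E, D)  # 32 pixels * (0.5)^2 = 8.0
```

Minimizing this quantity over the depth network's parameters drives the estimated depth toward the true depth at every pixel, which is the stated training goal of the first training unit.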

Then, according to an embodiment of the present disclosure, the second training unit 102 may train the confidence network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the labeled input image.

Here, the training of the confidence network is also supervised learning. The confidence output by the confidence network should reflect the accuracy of the estimated depth output by the depth network: higher confidence corresponds to higher accuracy of the estimated depth, and lower confidence corresponds to lower accuracy. To achieve this goal, the true confidence Y may be represented via the relative error between the estimated depth and the true depth, for example as:

$$Y_n(p) = \exp\!\left( -\alpha \,\frac{\left| E_n(p) - D_n(p) \right|}{D_n(p)} \right) \qquad (2)$$

where α is a constant, p denotes the position of a pixel, n indexes the n-th input image, $D_n$ denotes the true depth of the n-th input image $I_n$, and $E_n$ denotes the estimated depth of the n-th input image $I_n$. Here, it should be clear to those skilled in the art that this representation of the true confidence is merely exemplary, and the present disclosure is not limited to equation (2).

According to one embodiment of the present disclosure, the confidence (e.g., a value between 0 and 1) at a pixel position may represent how close the estimated depth at that position in the estimated depth image is to its true depth. For example, if the true confidence $Y_n(p)$ of the n-th input image at pixel p equals 1, this may mean that the estimated depth at that pixel equals the true depth. In other words, according to the present embodiment, the closer the confidence at a pixel position is to 1, the closer the estimated depth at that position is to its true depth. According to one embodiment of the present disclosure, the confidence image of the confidence network may be obtained from the relative error between the estimated depth and the true depth of the depth image. According to other embodiments of the present disclosure, other representations, such as the absolute error, may also be utilized; in general, two conditions should be satisfied: the confidence lies between 0 and 1, and the confidence is inversely related to the error, in other words, the smaller the error, the higher the confidence. Then, the loss function $\mathcal{L}_c$ for training the confidence network can be expressed as:

$$\mathcal{L}_c = \sum_{n}\sum_{p} \left\| C_n(p) - Y_n(p) \right\|^2 \qquad (3)$$

where n indexes the n-th input image, p denotes the position of a pixel, and $C_n$ is the confidence image output by the confidence network. Here, it should be clear to those skilled in the art that the above loss function for training the confidence network is merely exemplary, and the present disclosure is not limited thereto.
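As an illustrative sketch (again an assumption, not the patent's implementation), the true confidence of equation (2) and the confidence loss of equation (3) can be written in NumPy; the exponential form of Y is one representation satisfying the two stated conditions (values in [0, 1], inversely related to the error):

```python
import numpy as np

def true_confidence(estimated, true, alpha=1.0):
    """True confidence Y_n(p) from the relative depth error, as in
    equation (2): equals 1 at zero error, decays toward 0 as the
    relative error grows."""
    rel_err = np.abs(estimated - true) / true
    return np.exp(-alpha * rel_err)

def confidence_loss(predicted_conf, estimated, true, alpha=1.0):
    """Loss of equation (3): squared distance between the confidence
    image C_n output by the confidence network and the target Y_n."""
    Y = true_confidence(estimated, true, alpha)
    return np.sum((predicted_conf - Y) ** 2)

E = np.array([[2.0, 1.0]])   # estimated depths
D = np.array([[2.0, 2.0]])   # true depths
Y = true_confidence(E, D)    # exact estimate at the first pixel gives Y = 1
```

A perfectly calibrated confidence network (one that outputs Y itself) would incur zero loss under equation (3).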

It is clear to a person skilled in the art that depth estimation usually uses a supervised learning approach, i.e., corresponding to the case where true depth is available throughout training. In contrast, depth estimation according to the semi-supervised learning approach of the present disclosure corresponds to the case where part of the training data has no true depth.

According to one embodiment of the present disclosure, a confidence image of the unlabeled input image (i.e., one without true depth) may be predicted by the confidence network.

According to this embodiment, the confidence network is trained so that its output confidence becomes ever closer to the true confidence; a reliable confidence image can therefore be obtained for the estimated depth image of any input, even one without a true depth image. From the confidence image, the positions where the depth in the estimated depth image is accurate can be determined, and the estimated depths at these positions are used as pseudo-true depths for images without true depth. That is, these pseudo-true depths are treated as the true depths of the images without true depth, thereby increasing the number of training samples with true depth for depth estimation.

Next, according to an embodiment of the present disclosure, the third training unit 103 may simultaneously optimize the distance between the estimated depth and the true depth of the depth images of the labeled input image (image with true depth) and the unlabeled input image (image without true depth).

In other words, the present disclosure may employ a semi-supervised approach (in which a portion of the training data has no true depth) to train the depth network. According to one embodiment of the present disclosure, the training loss function $\mathcal{L}$ may include two terms, the supervised training loss $\mathcal{L}_s$ and the semi-supervised training loss $\mathcal{L}_u$:

$$\mathcal{L} = \mathcal{L}_s + \lambda \mathcal{L}_u \qquad (4)$$

where λ is a coefficient balancing the weights of the two terms. The balancing coefficient can be set by a person skilled in the art based on practical experience.

The semi-supervised training process according to the present disclosure can simultaneously use data with and without real depth for training, thereby expanding training samples for depth estimation.

Since supervised training has been described above, only the aspects in which semi-supervised training differs are described below; the parts that are the same as in supervised training are not described again.

For example, for an input image $I_m$ without true depth, its estimated depth image $E_m$ can first be obtained through the depth network. Then, a confidence image $C_m$ can be obtained through the trained confidence network. From the confidence image $C_m$, the positions in the depth image $E_m$ where the depth estimate is accurate can be obtained. Finally, the estimated depths at these positions are taken as the pseudo-true depth of the input image $I_m$ without true depth.

According to one embodiment of the present disclosure, for pixels of an unlabeled input image (i.e., one without true depth), the estimated depths of regions whose confidence in the confidence image of the unlabeled input image is greater than a predetermined threshold may be taken as pseudo-true depths of the unlabeled input image.

For example, a confidence threshold T may be set empirically by those skilled in the art; the region whose confidence is higher than the threshold T is then considered reliable and can be represented as a binary mask $B_m$:

Initialization: $B_m = 0$; mask generation: $B_m(C_m > T) = 1$. $\qquad (5)$

Thus, for an image $I_m$ without true depth, the result of its mask-based depth estimation can be recorded as $\hat{E}_m$, where, at the positions at which $B_m$ equals 1, the estimated depths are taken as pseudo-true depths (i.e., as the true depths of the image without a true depth image). Then, the semi-supervised learning loss function $\mathcal{L}_u$ can be expressed as a masked Euclidean distance:

$$\mathcal{L}_u = \sum_{m}\sum_{p} B_m(p) \left\| E_m(p) - \hat{E}_m(p) \right\|^2 \qquad (6)$$

where m indexes the m-th input image, p denotes the position of a pixel, and $E_m$ is the estimated depth output by the depth network. When optimizing equation (6), $\hat{E}_m$ can be regarded as a constant. For an image $I_m$ without true depth, the mask-based estimation result $\hat{E}_m$ can be continually updated during training. As the depth network and the confidence network continue to train, the depth estimates and confidence estimates become more and more accurate, which means that the results $\hat{E}_m$ will also become more and more accurate; they therefore need to be continually updated during training. In other words, the $\hat{E}_m$ recorded in the current iteration is used in the next round of semi-supervised learning. However, it should be clear to those skilled in the art that this semi-supervised training loss function is merely exemplary, and the present disclosure is not limited thereto.
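The mask of equation (5) and the masked loss of equation (6) can be sketched in NumPy as follows; this is an illustrative assumption about the implementation, with the recorded pseudo-true depths treated as constants exactly as the text describes:

```python
import numpy as np

def reliable_mask(confidence, T):
    """Binary mask of equation (5): B_m = 1 where the confidence
    image C_m exceeds the threshold T, 0 elsewhere."""
    return (confidence > T).astype(np.float64)

def semi_supervised_loss(estimated, pseudo_true, mask):
    """Masked Euclidean loss of equation (6); the recorded pseudo-true
    depths are held fixed (treated as constants) during optimization."""
    return np.sum(mask * (estimated - pseudo_true) ** 2)

C_m = np.array([[0.9, 0.3], [0.8, 0.1]])   # confidence image
E_m = np.array([[1.0, 5.0], [2.0, 7.0]])   # estimated depth at record time
B_m = reliable_mask(C_m, T=0.5)            # reliable-region mask
pseudo = B_m * E_m                         # recorded pseudo-true depths
# Later network output that drifted by 0.5 everywhere:
loss = semi_supervised_loss(E_m + 0.5, pseudo, B_m)
```

Only the two high-confidence pixels contribute to the loss; the low-confidence pixels are zeroed out by the mask, which is how unreliable pseudo-labels are excluded from training.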

According to one embodiment of the present disclosure, the depth network and the confidence network may be implemented by networks having a convolutional structure. The overall training procedure is as follows:

Step 1: Perform supervised training, i.e., train the depth network with data having true depth, e.g., by optimizing equation (1), and simultaneously train the confidence network, e.g., by optimizing equation (3). This training process may run for N1 iterations.

Step 2: recording mask-based pseudo-true depths, wherein for data I without true depthsmFirstly, obtaining the estimated depth map thereof through a depth networkThen obtaining a confidence map C of the system through a confidence networkmAnd finally binarized CmTo obtain a mask BmAnd will beAnd (7) recording.

Step 3: Perform semi-supervised training using the data with true depth and the recorded data with pseudo-true depth, e.g., by training the depth network by optimizing equation (4); the confidence network continues to be trained using the data with true depth, e.g., by optimizing equation (3). This training process may run for N2 iterations.

Repeat step 2 and step 3 N3 times.
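The three-step schedule above can be sketched as the following training skeleton; the callbacks `depth_step`, `conf_step`, and `record_pseudo` are placeholders (assumptions) standing in for the actual convolutional networks and optimizers:

```python
def train(labeled_data, unlabeled_data, N1, N2, N3,
          depth_step, conf_step, record_pseudo):
    """Skeleton of the three-step schedule. The callbacks are
    placeholders: depth_step(batch, pseudo) performs one depth-network
    update (supervised when pseudo is None), conf_step(batch) one
    confidence-network update, and record_pseudo(image) returns the
    recorded mask-based pseudo-true depths for an unlabeled image."""
    # Step 1: supervised training on data with true depth, N1 iterations
    # (optimize equations (1) and (3)).
    for _ in range(N1):
        for batch in labeled_data:
            depth_step(batch, pseudo=None)
            conf_step(batch)
    for _ in range(N3):  # repeat steps 2 and 3, N3 times
        # Step 2: record mask-based pseudo-true depths for unlabeled data.
        pseudo = [record_pseudo(img) for img in unlabeled_data]
        # Step 3: semi-supervised training, N2 iterations (equation (4));
        # the confidence network keeps training on labeled data only.
        for _ in range(N2):
            for batch in labeled_data:
                depth_step(batch, pseudo=None)  # supervised term of (4)
                conf_step(batch)                # equation (3)
            for img, pt in zip(unlabeled_data, pseudo):
                depth_step(img, pseudo=pt)      # semi-supervised term of (4)
    return pseudo
```

Note that the pseudo-true depths are re-recorded at the start of every outer loop, matching the statement that $\hat{E}_m$ must be continually updated as both networks improve.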

According to the semi-supervised method of the present disclosure, data with true depth and data without true depth can be used for training simultaneously, thereby enlarging the set of training samples for depth estimation.

An image processing method according to an embodiment of the present disclosure will be described below with reference to fig. 3. As shown in fig. 3, the image processing method according to the embodiment of the present disclosure starts at step S310.

In step S310, a depth network is trained using a labeled input image to obtain a depth image of the labeled input image.

Next, in step S320, a confidence network is trained using the labeled input image and the obtained depth image to obtain a confidence image indicating a region where the estimated depth of the depth image is close to the true depth.

Then, in step S330, the depth network is trained using the labeled input image and the unlabeled input image together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as a true depth of the unlabeled input image.

The image processing method according to one embodiment of the present disclosure further comprises the step of training the depth network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the tagged input image.

The image processing method according to one embodiment of the present disclosure further comprises the step of training the confidence network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the tagged input image.

The image processing method according to an embodiment of the present disclosure further comprises the step of obtaining a confidence image of the confidence network from a relative error between an estimated depth and a true depth of the depth image.

The image processing method according to an embodiment of the present disclosure further comprises the step of predicting a confidence image of the unlabeled input image by the confidence network.

The image processing method according to one embodiment of the present disclosure further includes the step of, for pixels of the unlabeled input image, taking the estimated depths of regions whose confidence in the confidence image of the unlabeled input image is greater than a predetermined threshold as pseudo-true depths of the unlabeled input image.

The image processing method according to an embodiment of the present disclosure further includes the step of simultaneously optimizing a distance between an estimated depth and a true depth of the depth images of the labeled and unlabeled input images.

An image processing method according to an embodiment of the present disclosure, wherein the labeled input image and the unlabeled input image are single color images.

Various embodiments of the above steps of the image processing method according to the embodiments of the present disclosure have been described in detail above, and a description thereof will not be repeated.

It is apparent that the respective operational procedures of the image processing method according to the present disclosure can be implemented in the form of computer-executable programs stored in various machine-readable storage media.

Moreover, the object of the present disclosure can also be achieved by: a storage medium storing the above executable program code is directly or indirectly supplied to a system or an apparatus, and a computer or a Central Processing Unit (CPU) in the system or the apparatus reads out and executes the program code. At this time, as long as the system or the apparatus has a function of executing a program, the embodiments of the present disclosure are not limited to the program, and the program may also be in any form, for example, an object program, a program executed by an interpreter, a script program provided to an operating system, or the like.

Such machine-readable storage media include, but are not limited to: various memories and storage units; semiconductor devices; disk units such as optical, magnetic, and magneto-optical disks; and other media suitable for storing information.

In addition, the computer can also implement the technical solution of the present disclosure by connecting to a corresponding website on the internet, downloading and installing the computer program code according to the present disclosure into the computer and then executing the program.

Fig. 4 is a block diagram of an exemplary structure of a general-purpose personal computer 1300 in which the image processing apparatus and method according to the embodiment of the present disclosure can be implemented.

As shown in fig. 4, the CPU 1301 executes various processes in accordance with a program stored in a Read Only Memory (ROM) 1302 or a program loaded from a storage section 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, data necessary for the CPU 1301 to execute the various processes is also stored as needed. The CPU 1301, the ROM 1302, and the RAM 1303 are connected to one another via a bus 1304. An input/output interface 1305 is also connected to the bus 1304.

The following components are connected to the input/output interface 1305: an input section 1306 (including a keyboard, a mouse, and the like), an output section 1307 (including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like), the storage section 1308 (including a hard disk and the like), and a communication section 1309 (including a network interface card such as a LAN card, a modem, and the like). The communication section 1309 performs communication processing via a network such as the Internet. A drive 1310 may also be connected to the input/output interface 1305 as needed. A removable medium 1311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 1310 as needed, so that a computer program read out therefrom is installed into the storage section 1308.

In the case where the above-described series of processes is realized by software, a program constituting the software is installed from a network such as the internet or a storage medium such as the removable medium 1311.

It should be understood by those skilled in the art that such a storage medium is not limited to the removable medium 1311 shown in Fig. 4, in which the program is stored and which is distributed separately from the apparatus in order to provide the program to the user. Examples of the removable medium 1311 include a magnetic disk (including a floppy disk (registered trademark)), an optical disk (including a Compact Disc Read-Only Memory (CD-ROM) and a Digital Versatile Disc (DVD)), a magneto-optical disk (including a MiniDisc (MD) (registered trademark)), and a semiconductor memory. Alternatively, the storage medium may be the ROM 1302, a hard disk contained in the storage section 1308, or the like, in which the program is stored and which is distributed to the user together with the apparatus containing it.

In the systems and methods of the present disclosure, it is apparent that individual components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations are to be considered equivalents of the present disclosure. Also, the steps of the series of processes described above may naturally be executed chronologically in the order described, but they need not be; some steps may be performed in parallel or independently of one another.

Although the embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, it should be understood that the above-described embodiments are merely illustrative of the present disclosure and do not constitute a limitation of the present disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made in the above-described embodiments without departing from the spirit and scope of the disclosure. Accordingly, the scope of the disclosure is to be defined only by the claims appended hereto, and by their equivalents.

With respect to the embodiments including the above embodiments, the following remarks are also disclosed:

Supplementary note 1. An image processing apparatus, comprising:

a first training unit that trains a depth network using a labeled input image to obtain a depth image of the labeled input image;

a second training unit that trains a confidence network using the labeled input image and the obtained depth image to obtain a confidence image, the confidence image indicating a region where the estimated depth of the depth image is close to the true depth; and

a third training unit that trains the depth network using the labeled and unlabeled input images together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is regarded as a true depth of the unlabeled input image.

Supplementary note 2. the apparatus according to supplementary note 1, wherein the first training unit trains the depth network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the labeled input image.

Supplementary note 3. the apparatus according to supplementary note 1, wherein the second training unit trains the confidence network by optimizing a distance between an estimated depth of the depth image and a true depth for pixels of the labeled input image.

Supplementary note 4. the apparatus according to supplementary note 3, wherein the confidence image of the confidence network is obtained from a relative error between an estimated depth and a true depth of the depth image.
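Supplementary note 4 does not fix the exact mapping from relative error to confidence. As a purely illustrative sketch (the `1 − error` form, the clipping to [0, 1], and the `eps` stabilizer are all assumptions, not part of the claims), the confidence target used to supervise the confidence network might be computed as:

```python
import numpy as np

def confidence_target(est_depth, true_depth, eps=1e-6):
    """Illustrative confidence target derived from the relative error
    between the estimated depth and the true depth: pixels where the
    estimate is close to the true depth get values near 1."""
    rel_err = np.abs(est_depth - true_depth) / (true_depth + eps)
    return np.clip(1.0 - rel_err, 0.0, 1.0)
```

Any monotonically decreasing function of the relative error would serve the same purpose; the linear form above is merely the simplest choice.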

Supplementary note 5. the apparatus according to supplementary note 1 or supplementary note 4, wherein a confidence image of the unlabeled input image is predicted by the confidence network.

Supplementary note 6. the apparatus according to supplementary note 5, wherein, for pixels of the unlabeled input image, the estimated depth in a region where the confidence image of the unlabeled input image is greater than a predetermined threshold is taken as the pseudo-true depth of the unlabeled input image.
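The selection in supplementary note 6 can be sketched as a simple per-pixel thresholding. The threshold value 0.9 and the use of 0 as a placeholder for rejected pixels are assumptions for illustration only; the notes specify neither.

```python
import numpy as np

def pseudo_true_depth(est_depth, confidence, threshold=0.9):
    """Keep the estimated depth only at pixels whose predicted confidence
    exceeds the threshold; the returned mask marks which pixels carry a
    pseudo-true depth and may contribute to the unlabeled loss."""
    mask = confidence > threshold
    pseudo = np.where(mask, est_depth, 0.0)
    return pseudo, mask
```

During training, only the masked pixels of the unlabeled image would be treated as if they had true depth; the remaining pixels are simply excluded from the loss.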

Supplementary note 7. the apparatus according to supplementary note 1, wherein the third training unit optimizes the distance between the estimated depth and the true depth of the depth images of the labeled input image and the unlabeled input image simultaneously.
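The simultaneous optimization of supplementary note 7 amounts to a joint loss over labeled and pseudo-labeled pixels. The sketch below uses an L1 distance with equal weighting of the two terms; both choices are assumptions, since the notes only speak of "a distance" without specifying it.

```python
import numpy as np

def joint_depth_loss(est_labeled, true_labeled, est_unlabeled, pseudo, mask):
    """Joint objective for the third training unit: distance on the
    labeled pixels plus distance on the unlabeled pixels whose
    pseudo-true depth passed the confidence threshold (mask)."""
    loss_labeled = np.abs(est_labeled - true_labeled).mean()
    if mask.any():
        loss_unlabeled = np.abs(est_unlabeled[mask] - pseudo[mask]).mean()
    else:
        loss_unlabeled = 0.0
    return loss_labeled + loss_unlabeled
```

In practice the unlabeled term would typically carry a weighting factor so that noisy pseudo-labels do not dominate the labeled supervision.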

Supplementary note 8 the apparatus according to supplementary note 1, wherein the labeled input image and the unlabeled input image are single color images.

Supplementary note 9. an image processing method, comprising:

training a depth network using a labeled input image to obtain a depth image of the labeled input image;

training a confidence network using the labeled input image and the obtained depth image to obtain a confidence image indicating a region of the depth image whose estimated depth is close to the true depth; and

training the depth network using the labeled and unlabeled input images together, wherein a pseudo-true depth of the unlabeled input image is obtained from the depth image and the confidence image, and the pseudo-true depth is considered to be a true depth of the unlabeled input image.

Supplementary note 10. the method according to supplementary note 9, wherein the depth network is trained by optimizing the distance between the estimated depth and the true depth of the depth image for pixels of the labeled input image.

Supplementary note 11. the method according to supplementary note 9, wherein the confidence network is trained by optimizing the distance between the estimated depth of the depth image and the true depth for pixels of the labeled input image.

Supplementary note 12. the method according to supplementary note 11, wherein the confidence image of the confidence network is obtained from the relative error between the estimated depth and the true depth of the depth image.

Supplementary note 13. the method according to supplementary note 9 or supplementary note 12, wherein a confidence image of the unlabeled input image is predicted by the confidence network.

Supplementary note 14. the method according to supplementary note 13, wherein, for pixels of the unlabeled input image, the estimated depth in a region where the confidence image of the unlabeled input image is greater than a predetermined threshold is taken as the pseudo-true depth of the unlabeled input image.

Supplementary note 15. the method according to supplementary note 9, wherein training the depth network using the labeled and unlabeled input images together comprises simultaneously optimizing the distance between the estimated depth and the true depth of the depth images of the labeled and unlabeled input images.

Supplementary note 16 the method of supplementary note 9, wherein the labeled input image and the unlabeled input image are single color images.

Supplementary note 17. A program product comprising machine-readable instruction code stored therein, wherein the instruction code, when read and executed by a computer, is capable of causing the computer to perform the method according to any one of supplementary notes 9 to 16.
