Method for embedding image in video, and method and device for acquiring plane prediction model

Document No.: 1941727  Publication date: 2021-12-07

Reading note: This technology, "Method for embedding image in video, and method and device for acquiring plane prediction model" (视频中嵌入图像的方法、平面预测模型获取方法和装置), was designed and created by 周芳汝 (Zhou Fangru), 安山 (An Shan), and 杨玫 (Yang Mei) on 2020-09-22. Its main content is as follows: The present disclosure provides a method for embedding an image in a video, and a method and a device for acquiring a plane prediction model, relating to the field of image processing. The method comprises: inputting a video frame image of a video into a plane prediction model and acquiring a predicted plane mask of the video frame image, wherein the plane prediction model is obtained by training a deep learning model with training images carrying labels of a plane detection box and a plane mask; and embedding an image to be embedded into the predicted plane mask of the video frame image. By automatically finding the plane masks that exist widely in video frame images and embedding the image to be embedded into a plane mask, the image is fused into the video not only automatically and naturally but also more widely.

1. A method for embedding an image in a video, comprising:

inputting a video frame image of a video into a plane prediction model, and acquiring a predicted plane mask of the video frame image, wherein the plane prediction model is obtained by training a deep learning model with training images carrying labels of a plane detection box and a plane mask;

and embedding an image to be embedded into the predicted plane mask of the video frame image.

2. The method of claim 1,

wherein the plane prediction model is obtained by training the deep learning model with training images carrying labels of a plane detection box and a plane mask as well as labeling information of 4 key points in the plane mask;

wherein inputting the video frame image of the video into the plane prediction model acquires the predicted plane mask of the video frame image and 4 key points in the plane mask; and

wherein embedding the image to be embedded into the predicted plane mask of the video frame image comprises: aligning 4 vertices of the image to be embedded with the 4 key points in the predicted plane mask of the video frame image, and embedding the image to be embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image.

3. The method of claim 2, wherein the labeling information of the 4 key points in the plane mask of a training image is obtained by:

converting the plane mask of the training image from a pixel coordinate system to a plane coordinate system;

determining a boundary line of the plane mask in the plane coordinate system;

determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask;

and converting the 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.

4. The method of claim 3, wherein converting the plane mask of the training image from the pixel coordinate system to the plane coordinate system comprises:

converting the plane mask of the training image from a pixel coordinate system to a world coordinate system;

and converting the plane mask of the training image from the world coordinate system to the plane coordinate system.

5. The method of claim 3, wherein determining the boundary line of the plane mask in the plane coordinate system comprises:

performing edge detection on the plane mask in the plane coordinate system;

performing Hough line detection on the plane mask in the plane coordinate system based on the detected edges of the plane mask;

determining a probability that each detected straight line is a boundary line of the plane mask;

and determining, based on the probabilities, one boundary line of the plane mask in the plane coordinate system from the detected straight lines.

6. The method of claim 5, wherein determining the probability that the detected line is a boundary line of a plane mask comprises:

determining the probability that a detected straight line is a boundary line of the plane mask according to difference information of the symmetric regions on the two sides of the straight line, wherein the larger the difference between the symmetric regions on the two sides of the straight line, the higher the probability that the straight line is a boundary line of the plane mask.

7. The method of claim 5, wherein determining one boundary line of the plane mask in the plane coordinate system from the detected straight lines comprises:

selecting, from the detected straight lines, pairs of straight lines that are perpendicular or parallel to each other;

where such a line pair is found, determining the straight line with the higher probability in the line pair with the highest probability as a boundary line of the plane mask in the plane coordinate system;

and where no such line pair is found, determining the straight line with the highest probability as a boundary line of the plane mask in the plane coordinate system.

8. The method of claim 5, wherein determining the boundary line of the plane mask in the plane coordinate system further comprises at least one of:

before the edge detection, performing median filtering on the plane mask in the plane coordinate system;

after the Hough line detection, merging the detected straight lines based on their slopes.

9. The method of claim 3, wherein determining the inscribed rectangle of the plane mask in the plane coordinate system comprises:

determining, in the plane coordinate system, an inscribed rectangle of the plane mask whose edges are parallel to the boundary line, wherein the inscribed rectangle comprises a maximum inscribed square.

10. The method of claim 2, wherein embedding the image to be embedded into the predicted plane mask of the video frame image comprises:

determining a transformation matrix from the image to be embedded to the predicted plane mask of the video frame image according to the mapping relation between the 4 vertices of the image to be embedded and the 4 key points in the predicted plane mask of the video frame image;

and based on the transformation matrix, transforming each foreground point of the image to be embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image.

11. The method of claim 2,

wherein the deep learning model employs a loss function determined based on the 4 key points in the labeling information and the predicted 4 key points after an alignment operation;

wherein the alignment operation on the predicted 4 key points comprises:

determining a transformation ratio based on the 4 key points in the labeling information and the predicted 4 key points;

scaling the predicted 4 key points according to the transformation ratio;

determining first position transformation information based on the 4 key points in the labeling information;

determining second position transformation information based on the predicted 4 key points;

and adding the first position transformation information to, and subtracting the second position transformation information from, the scaled predicted 4 key points, respectively, to complete the alignment operation on the predicted 4 key points.

12. The method according to any one of claims 1 to 11,

the deep learning model comprises a region-based convolutional neural network;

or, the image to be embedded comprises an enterprise identification image or a product image.

13. A plane prediction model acquisition method, comprising:

labeling a plane detection box, a plane mask, and 4 key points in the plane mask in a training image;

training a deep learning model with the training image carrying labels of the plane detection box and the plane mask as well as labeling information of the 4 key points in the plane mask;

and determining the trained deep learning model as a plane prediction model.

14. The method of claim 13, wherein labeling the 4 key points in the plane mask in the training image comprises:

converting the plane mask of the training image from a pixel coordinate system to a plane coordinate system;

determining a boundary line of the plane mask in the plane coordinate system;

determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask;

and converting the 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.

15. An apparatus for embedding an image in a video, comprising:

a memory; and

a processor coupled to the memory, the processor configured to perform the method of embedding images in video of any of claims 1-12 based on instructions stored in the memory.

16. A plane prediction model acquisition apparatus, comprising:

a memory; and

a processor coupled to the memory, the processor configured to perform the planar prediction model acquisition method of any of claims 13-14 based on instructions stored in the memory.

17. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of embedding an image in a video according to any one of claims 1 to 12 or the plane prediction model acquisition method according to any one of claims 13 to 14.

Technical Field

The present disclosure relates to the field of image processing, and in particular, to a method for embedding an image in a video, and a method and an apparatus for obtaining a plane prediction model.

Background

Advertisements in videos are one of the more effective means of promotion.

One approach inserts an advertisement video into the video: a moment is selected in the original video, and a prepared advertisement video is inserted at that moment. While the inserted advertisement plays, the user cannot see the original video at all, which affects the user's viewing experience.

Another approach pastes an advertisement image in a corner region of each video frame image, so that an advertisement image pops up in a corner of the video playing interface while the user watches the original video. The user can watch the original video while the advertisement is displayed, but the popped-up advertisement image may block key content of the original video, and the fusion of the advertisement image and the video is unnatural.

A further approach embeds the advertisement image into the video, i.e., embeds the advertisement image at a certain position of the video frame images so that the advertisement image and the video are integrated. In some related art, the video is scanned to find a target such as a specific object or an existing advertisement, and the target is replaced with the advertisement image. In other related art, an advertisement placement position is marked in one video frame image, that position is tracked in the other video frame images by feature point matching, and the advertisement is placed at the tracked position.

The inventors found that the related art for embedding advertisement images in videos places strong restrictions on where the advertisement image can be embedded, so that a suitable embedding position often cannot be found in a video: for example, a specific replaceable target is hard to find, or a pre-marked advertisement placement position cannot be tracked, making it difficult to embed the advertisement image in the video.

Disclosure of Invention

The embodiments of the present disclosure automatically find the plane masks that exist widely in video frame images and embed the image to be embedded into a plane mask, so that the image is fused into the video not only automatically and naturally but also more widely. In addition, key points in the plane mask of each video frame image can be found automatically, and the image to be embedded is embedded into the position region corresponding to the key points, improving the fusion effect of the image and the video.

Some embodiments of the present disclosure provide a method for embedding an image in a video, including:

inputting a video frame image of a video into a plane prediction model, and acquiring a predicted plane mask of the video frame image, wherein the plane prediction model is obtained by training a deep learning model with training images carrying labels of a plane detection box and a plane mask;

and embedding an image to be embedded into the predicted plane mask of the video frame image.

In some embodiments, the plane prediction model is obtained by training the deep learning model with training images carrying labels of a plane detection box and a plane mask as well as labeling information of 4 key points in the plane mask; inputting the video frame image of the video into the plane prediction model acquires the predicted plane mask of the video frame image and 4 key points in the plane mask; and embedding the image to be embedded into the predicted plane mask of the video frame image comprises: aligning 4 vertices of the image to be embedded with the 4 key points in the predicted plane mask of the video frame image, and embedding the image to be embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image.

In some embodiments, the labeling information of the 4 key points in the plane mask of a training image is obtained as follows:

converting the plane mask of the training image from a pixel coordinate system to a plane coordinate system;

determining a boundary line of the plane mask in the plane coordinate system;

determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask;

and converting the 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.

In some embodiments, converting the plane mask of the training image from the pixel coordinate system to the plane coordinate system comprises:

converting the plane mask of the training image from a pixel coordinate system to a world coordinate system;

and converting the plane mask of the training image from the world coordinate system to the plane coordinate system.

In some embodiments, determining the boundary line of the plane mask in the plane coordinate system includes:

performing edge detection on the plane mask in the plane coordinate system;

performing Hough line detection on the plane mask in the plane coordinate system based on the detected edges of the plane mask;

determining a probability that each detected straight line is a boundary line of the plane mask;

and determining, based on the probabilities, one boundary line of the plane mask in the plane coordinate system from the detected straight lines.

In some embodiments, determining the probability that the detected straight line is the boundary line of the plane mask comprises:

determining the probability that a detected straight line is a boundary line of the plane mask according to difference information of the symmetric regions on the two sides of the straight line, wherein the larger the difference between the symmetric regions on the two sides of the straight line, the higher the probability that the straight line is a boundary line of the plane mask.

In some embodiments, determining one boundary line of the plane mask in the plane coordinate system from the detected straight lines includes:

selecting, from the detected straight lines, pairs of straight lines that are perpendicular or parallel to each other;

where such a line pair is found, determining the straight line with the higher probability in the line pair with the highest probability as a boundary line of the plane mask in the plane coordinate system;

and where no such line pair is found, determining the straight line with the highest probability as a boundary line of the plane mask in the plane coordinate system.

In some embodiments, determining the boundary line of the plane mask in the plane coordinate system further includes at least one of:

before the edge detection, performing median filtering on the plane mask in the plane coordinate system;

after the Hough line detection, merging the detected straight lines based on their slopes.

In some embodiments, determining the inscribed rectangle of the plane mask in the plane coordinate system comprises: determining, in the plane coordinate system, an inscribed rectangle of the plane mask whose edges are parallel to the boundary line, wherein the inscribed rectangle comprises a maximum inscribed square.

In some embodiments, embedding the image to be embedded into the predicted plane mask of the video frame image comprises:

determining a transformation matrix from the image to be embedded to the predicted plane mask of the video frame image according to the mapping relation between the 4 vertices of the image to be embedded and the 4 key points in the predicted plane mask of the video frame image;

and based on the transformation matrix, transforming each foreground point of the image to be embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image.

In some embodiments, the deep learning model employs a loss function determined based on the 4 key points in the labeling information and the predicted 4 key points after an alignment operation;

the alignment operation on the predicted 4 key points comprises:

determining a transformation ratio based on the 4 key points in the labeling information and the predicted 4 key points;

scaling the predicted 4 key points according to the transformation ratio;

determining first position transformation information based on the 4 key points in the labeling information;

determining second position transformation information based on the predicted 4 key points;

and adding the first position transformation information to, and subtracting the second position transformation information from, the scaled predicted 4 key points, respectively, to complete the alignment operation on the predicted 4 key points.

In some embodiments, the deep learning model comprises a region-based convolutional neural network.

In some embodiments, the image to be embedded comprises an enterprise identification image or a product image.

Some embodiments of the present disclosure provide a plane prediction model obtaining method, including:

labeling a plane detection box, a plane mask, and 4 key points in the plane mask in a training image;

training a deep learning model with the training image carrying labels of the plane detection box and the plane mask as well as labeling information of the 4 key points in the plane mask;

and determining the trained deep learning model as a plane prediction model.

In some embodiments, labeling the 4 key points in the plane mask in the training image comprises:

converting the plane mask of the training image from a pixel coordinate system to a plane coordinate system;

determining a boundary line of the plane mask in the plane coordinate system;

determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask;

and converting the 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.

Some embodiments of the present disclosure provide an apparatus for embedding an image in a video, including:

a memory; and

a processor coupled to the memory, the processor configured to perform the method of embedding images in video according to any of the embodiments based on instructions stored in the memory.

Some embodiments of the present disclosure provide a plane prediction model obtaining apparatus, including:

a memory; and

a processor coupled to the memory, the processor configured to perform the plane prediction model acquisition method of any of the embodiments based on instructions stored in the memory.

Some embodiments of the present disclosure provide a non-transitory computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the method for embedding an image in a video according to any of the embodiments or the method for acquiring a plane prediction model according to any of the embodiments.

Drawings

The drawings that will be used in the description of the embodiments or the related art will be briefly described below. The present disclosure can be understood more clearly from the following detailed description, which proceeds with reference to the accompanying drawings.

It is to be understood that the drawings in the following description are merely some embodiments of the present disclosure, and that other drawings may be derived from them by one of ordinary skill in the art without inventive effort.

Fig. 1 illustrates a flow diagram of a planar prediction model acquisition method of some embodiments of the present disclosure.

Fig. 2 is a schematic flow chart diagram illustrating a plane prediction model acquisition method according to further embodiments of the present disclosure.

FIG. 3 illustrates a schematic diagram of a deep learning model of some embodiments of the present disclosure.

Fig. 4 illustrates a flowchart of labeling 4 keypoints in a plane mask in a training image according to some embodiments of the present disclosure.

FIG. 5 illustrates a schematic diagram of three coordinate systems of some embodiments of the present disclosure.

Fig. 6 illustrates a flow diagram of a method of embedding an image in a video according to some embodiments of the present disclosure.

Fig. 7 is a flow diagram illustrating a method for embedding an image in a video according to further embodiments of the disclosure.

Fig. 8 shows a schematic diagram of an apparatus for embedding an image in a video according to some embodiments of the present disclosure.

Fig. 9 shows a schematic diagram of a planar prediction model acquisition apparatus according to some embodiments of the present disclosure.

Detailed Description

The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.

Unless otherwise specified, terms such as "first" and "second" in the present disclosure are used to distinguish different objects and do not denote size, order in time, or the like.

The embodiments of the disclosure automatically find, through a plane prediction model, the plane masks that exist widely in video frame images and embed the image to be embedded into a plane mask, so that the image is fused into the video automatically and naturally, and more widely.

Fig. 1 illustrates a flow diagram of a planar prediction model acquisition method of some embodiments of the present disclosure. The plane prediction model is capable of predicting a plane mask in an image.

As shown in fig. 1, the method of this embodiment includes:

in step 110, the plane detection box and the plane mask in the training image are labeled.

The planes in training images may be labeled with detection boxes and masks manually, or a ready-made dataset of training images whose planes are already labeled with detection boxes and masks may be used, for example the PlaneRCNN dataset, which provides not only training images with labels of plane detection boxes and plane masks but also the camera parameters associated with each training image and the rotation/translation matrix from the camera coordinate system to the world coordinate system.

In step 120, a deep learning model is trained with training images carrying labels of plane detection boxes and plane masks, so that the deep learning model learns to predict the plane detection box and plane mask of an image.

The deep learning model comprises a region-based convolutional neural network (RCNN), for example a MaskRCNN network. A deep learning model such as the MaskRCNN network comprises a branch for plane detection box regression and a branch for plane mask regression. The detection box branch performs plane detection box regression and may also perform semantic category regression. Training images with plane detection box labels train the detection box regression branch, and training images with plane mask labels train the mask regression branch.

During training, a total loss is determined from the loss between each training image's labeled plane detection box and the model-predicted plane detection box and the loss between its labeled plane mask and the model-predicted plane mask; the parameters of the deep learning model are updated according to the total loss; and this process is iterated until a training termination condition is met, for example a preset number of iterations is reached or the total loss falls below a certain value.
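As an illustration of how the per-branch losses are totaled, the following minimal training-step sketch assumes a torchvision-style detection model that, in training mode, returns a dictionary of per-branch losses; model, optimizer, images, and targets are placeholders supplied by the caller.

```python
import torch

def train_one_step(model, optimizer, images, targets):
    """One update: total loss = detection-box loss + mask loss (+ any others)."""
    model.train()
    # e.g. {"loss_box_reg": ..., "loss_mask": ..., ...} per the model's branches
    loss_dict = model(images, targets)
    total_loss = sum(loss for loss in loss_dict.values())
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```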

In step 130, the trained deep learning model is determined as the plane prediction model. The plane prediction model can predict the plane detection box of an image and the plane mask within the plane detection box.

Fig. 2 is a schematic flow chart diagram illustrating a plane prediction model acquisition method according to further embodiments of the present disclosure. The plane prediction model can predict not only the plane mask in the image but also 4 key points in the plane mask.

As shown in fig. 2, the method of this embodiment includes:

in step 210, the plane detection box, the plane mask and 4 key points in the plane mask in the training image are labeled.

For example, the 4 key points in the plane mask may be labeled in the middle of the plane mask. The labeling method for the 4 key points in the plane mask is described in detail in the embodiment of fig. 4 below.

In step 220, a deep learning model is trained with training images carrying labels of plane detection boxes and plane masks as well as labeling information of 4 key points in each plane mask, so that the deep learning model learns to predict the plane detection box, the plane mask, and the 4 key points of an image.

The deep learning model comprises an RCNN, for example a MaskRCNN network. FIG. 3 shows a schematic diagram of the deep learning model. As shown in fig. 3, a deep learning model such as the MaskRCNN network comprises a branch for plane detection box regression, a branch for plane mask regression, and a branch for keypoint regression. The detection box branch performs plane detection box regression and may also perform semantic category regression. Training images with plane detection box labels train the detection box regression branch, training images with plane mask labels train the mask regression branch, and training images with labeling information of the 4 key points train the keypoint regression branch. The MaskRCNN network uses the RoIAlign (Region of Interest Align) method to extract features for each candidate region (proposal region) obtained from the original image.
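torchvision does not ship a single detector that combines box, mask, and keypoint heads, so the following sketch is one hedged way to approximate the three-branch model described above: it builds a standard Mask R-CNN and attaches the keypoint head used by torchvision's Keypoint R-CNN, configured for 4 key points. The class count, layer sizes, and feature-map names are assumptions rather than values from the original.

```python
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.keypoint_rcnn import (
    KeypointRCNNHeads,
    KeypointRCNNPredictor,
)
from torchvision.ops import MultiScaleRoIAlign

# Box + mask branches: a Mask R-CNN with 2 classes (background and "plane").
model = maskrcnn_resnet50_fpn(num_classes=2)

# Keypoint branch regressing the 4 plane key points, mirroring the head used
# by Keypoint R-CNN; RoIAlign pools features for each candidate region.
model.roi_heads.keypoint_roi_pool = MultiScaleRoIAlign(
    featmap_names=["0", "1", "2", "3"], output_size=14, sampling_ratio=2
)
model.roi_heads.keypoint_head = KeypointRCNNHeads(256, tuple(512 for _ in range(8)))
model.roi_heads.keypoint_predictor = KeypointRCNNPredictor(512, num_keypoints=4)
```

With all three heads attached, each training target would additionally need a "keypoints" tensor of shape (num_planes, 4, 3) holding (x, y, visibility) per key point.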

Since the embedding position may move on the plane, the result predicted by the deep learning model is considered correct as long as the predicted position lies on the plane and the boundary lines of the image and the plane are parallel to each other. The deep learning model therefore adopts a loss function computed after keypoint alignment, i.e., a loss function determined based on the 4 key points in the labeling information and the predicted 4 key points after the alignment operation. For example, the MaskRCNN network employs a Smooth_L1 loss after keypoint alignment.

Let the keypoint label of the current plane be gt ∈ R^{N×4×2} and the keypoint coordinates predicted by the network be pre ∈ R^{N×4×2}, where N denotes the number of planes, 4 denotes the 4 key points, and 2 denotes the abscissa and ordinate. Let the aligned keypoint coordinates be pre″ and the loss of the network keypoint branch be loss_k; loss_k is calculated as follows.

(1) Determine a transformation ratio r based on the 4 key points in the labeling information and the predicted 4 key points:

r = (max(gt) − min(gt)) / (max(pre) − min(pre))

where max denotes taking the maximum value and min denotes taking the minimum value.

(2) Scale the predicted 4 key points according to the transformation ratio r; the predicted key points after the size transformation are denoted pre′:

pre′ = (pre − min(pre)) · r + min(pre)

(3) Determine first position transformation information gt_c based on the 4 key points in the labeling information.

(4) Determine second position transformation information pre′_c based on the predicted 4 key points.

(5) Add the first position transformation information to, and subtract the second position transformation information from, the size-transformed predicted 4 key points to complete the alignment operation; the aligned key points are denoted pre″:

pre″ = pre′ + gt_c − pre′_c

(6) The loss of the network keypoint branch loss_k is the Smooth_L1 loss between the aligned prediction and the label:

loss_k = Smooth_L1(pre″, gt)

and aligning the 4 key points, so that a quadrilateral area formed by the 4 key points is positioned in the middle of the plane mask.

During training, a total loss is determined from the loss between each training image's labeled plane detection box and the model-predicted plane detection box, the loss between its labeled plane mask and the model-predicted plane mask, and the loss between the 4 labeled key points and the predicted 4 key points after the alignment operation; the parameters of the deep learning model are updated according to the total loss; and this process is iterated until a training termination condition is met, for example a preset number of iterations is reached or the total loss falls below a certain value.

In step 230, the trained deep learning model is determined as a plane prediction model. The plane prediction model can predict a plane detection box of the image, a plane mask in the plane detection box, and 4 key points in the plane mask.

Fig. 4 illustrates a flowchart of labeling 4 keypoints in a plane mask in a training image according to some embodiments of the present disclosure.

As shown in fig. 4, the method of this embodiment includes:

at step 410, a training image containing a plane is acquired.

Many images contain planes such as, but not limited to, a desktop, a wall surface, various surfaces of a cabinet, a floor, etc. A side surface of a cabinet is shown in fig. 4.

At step 420, a plane mask of the training image in the pixel coordinate system is obtained.

As described above, the plane mask of the training image in the pixel coordinate system can be obtained through labeling, or the training image and its plane mask in the pixel coordinate system can be obtained from the ready-made PlaneRCNN dataset.

At step 430, the plane mask of the training image is converted from the pixel coordinate system to the plane coordinate system in steps (1)-(2):

(1) Convert the plane mask of the training image from the pixel coordinate system to the world coordinate system according to the camera parameters associated with the training image and the rotation/translation matrix from the camera coordinate system to the world coordinate system.

Coordinates in the pixel coordinate system: the coordinates on the image obtained after a camera photographs a scene; the pixel coordinate system is a two-dimensional coordinate system.

Let the coordinates of the foreground points of the plane mask of the training image in the pixel coordinate system be S_pixel = {(u_i, v_i) | i = 1, …, N}, and let their coordinates in the world coordinate system be S_world = {(x_i, y_i, z_i) | i = 1, …, N}, where N denotes the number of foreground points.

(2) Convert the plane mask of the training image from the world coordinate system to the plane coordinate system.

Coordinates in the plane coordinate system: coordinates corresponding to the image obtained when the camera photographs the plane; under the plane coordinate system, every foreground point on the plane has the same depth value. The plane coordinate system is a two-dimensional coordinate system.

Fig. 5 shows a schematic representation of three coordinate systems. From left to right, a pixel coordinate system, a world coordinate system and a plane coordinate system are arranged in sequence.

Let the coordinates of the foreground points of the plane mask in the plane coordinate system be S_plane = {(x′_i, y′_i) | i = 1, …, N}.

Take two points A = (x1, y1, z1) ∈ S_world and B = (x2, y2, z2) ∈ S_world of the plane mask in the world coordinate system, and then find a point C = (x3, y3, z3) on the plane in the world coordinate system such that the vector AC is perpendicular to the vector AB. Taking A as the origin, the direction of AB as the x-axis, and the direction of AC as the y-axis, a plane coordinate system is constructed.

The coordinates of point C are calculated as follows.

The normal n and offset d of the plane are known, as are A = (x1, y1, z1) and B = (x2, y2, z2). Because AC ⊥ AB and point C lies on the plane with normal n, the following relations are obtained:

(C − A) · (B − A) = 0

n · C + d = 0

If the vector AB is parallel to the x-axis, then (x3, y3, z3) = (x1, y1, z1 + 1);

otherwise, if the vector AB is not parallel to the x-axis, then:

x3 = 0

and y3 and z3 are solved from the two relations above.

From the above, the points A = (x1, y1, z1), B = (x2, y2, z2), and C = (x3, y3, z3) in the world coordinate system are obtained.

Since A is the origin of the plane coordinate system, AB lies along its x-axis, and AC lies along its y-axis, the coordinates of the points A, B, C in the plane coordinate system are:

A′ = (0, 0, 0)

B′ = (|AB|, 0, 0)

C′ = (0, |AC|, 0)

From the coordinates of these three points in the world coordinate system and in the plane coordinate system, a transformation matrix M between the world coordinate system and the plane coordinate system is obtained, and S_plane is calculated from S_world according to the transformation matrix M.

This yields S_plane, i.e., the coordinates of the foreground points of the plane mask in the plane coordinate system.
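A sketch of the world-to-plane conversion following the construction above, assuming the plane normal n and offset d are known (with the convention n · p + d = 0, as plane parameters are provided by PlaneRCNN-style data) and that A and B are simply taken as the first two foreground points; the degenerate-case handling mirrors the text.

```python
import numpy as np

def world_to_plane(S_world: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """S_world: (N, 3) foreground points of one plane mask in world coordinates;
    n, d: plane normal and offset. Returns the (N, 2) plane coordinates."""
    A, B = S_world[0], S_world[1]
    AB = B - A
    if np.allclose(AB[1:], 0.0):            # AB parallel to the x-axis
        C = A + np.array([0.0, 0.0, 1.0])
    else:                                   # x3 = 0; solve the two relations
        M = np.array([[n[1], n[2]],         # n·C + d = 0
                      [AB[1], AB[2]]])      # (C - A)·(B - A) = 0
        rhs = np.array([-d, AB @ A])
        y3, z3 = np.linalg.solve(M, rhs)
        C = np.array([0.0, y3, z3])
    ex = AB / np.linalg.norm(AB)            # x-axis: direction of AB
    AC = C - A
    ey = AC / np.linalg.norm(AC)            # y-axis: direction of AC
    rel = S_world - A                       # A is the origin of the plane frame
    return np.stack([rel @ ex, rel @ ey], axis=1)
```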

Each plane has its own plane coordinate system; an inscribed rectangle of the plane mask (such as the maximum inscribed square) is much easier to find in the plane coordinate system, and the 4 vertices of the inscribed rectangle serve as the 4 key points.

At step 440, the plane mask is median filtered in the plane coordinate system, expressed as: mask = MedianFilter(mask). In fig. 4, the mask on the right is the plane mask before filtering, and the mask on the left is the plane mask after filtering.

Median filtering is a nonlinear smoothing technique that sets the gray value of each pixel to the median of the gray values of all pixels within a neighborhood window around that point.

At step 450, edge detection is performed on the plane mask in the plane coordinate system, expressed as: edges = EdgeDetect(mask).

For edge detection techniques, reference may be made to the prior art.

In step 460, based on the detected edges of the plane mask, Hough straight-line detection is performed on the plane mask in the plane coordinate system, expressed as: lines = HoughLineDetect(edges). For the Hough line detection method, reference may be made to the prior art.

Further, the straight lines whose number of pixel points is greater than a set threshold voteThresh are screened out from the detected straight lines as keep_lines: keep_lines = {line_j | line_j(pixel) ≥ voteThresh, line_j ∈ lines}, where line_j(pixel) denotes the number of pixel points contained in the j-th straight line detected on the plane mask.

At step 470, the detected straight lines are merged based on their slopes, expressed as: merge_lines = MergeLines(keep_lines), where straight lines with similar slopes are merged into one straight line.
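Steps 440-470 map naturally onto OpenCV; the sketch below is one hedged realization in which the voteThresh screening is folded into the threshold parameter of HoughLinesP, and lines are merged purely by angle (a fuller merge would also compare offsets). All threshold values are assumptions.

```python
import cv2
import numpy as np

def detect_candidate_lines(mask: np.ndarray, vote_thresh: int = 50):
    """mask: uint8 binary plane mask rendered in the plane coordinate system.
    Returns merged line segments as (x1, y1, x2, y2) tuples."""
    mask = cv2.medianBlur(mask, 5)                       # step 440: median filtering
    edges = cv2.Canny(mask, 50, 150)                     # step 450: edge detection
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180,        # step 460: Hough lines;
                           threshold=vote_thresh,        # vote-count screening
                           minLineLength=20, maxLineGap=5)
    if segs is None:
        return []
    segs = [tuple(map(float, s[0])) for s in segs]
    ang = [np.arctan2(y2 - y1, x2 - x1) % np.pi for x1, y1, x2, y2 in segs]
    merged, used = [], [False] * len(segs)
    for i, s in enumerate(segs):                         # step 470: merge lines
        if used[i]:                                      # with similar slopes
            continue
        group = [s]
        for j in range(i + 1, len(segs)):
            diff = abs(ang[i] - ang[j])
            if not used[j] and min(diff, np.pi - diff) < np.deg2rad(5):
                used[j] = True
                group.append(segs[j])
        pts = np.array(group).reshape(-1, 2)
        direction = np.array([np.cos(ang[i]), np.sin(ang[i])])
        t = pts @ direction                              # project endpoints onto the
        p, q = pts[np.argmin(t)], pts[np.argmax(t)]      # line direction, keep extremes
        merged.append((*p, *q))
    return merged
```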

At step 480, the probability that each detected straight line is a boundary line of the plane mask is determined, and pairs of straight lines that are perpendicular or parallel to each other are selected from the detected straight lines, expressed as: choose_lines = ChooseLine(merge_lines).

The probability that a detected straight line is a boundary line of the plane mask is determined according to the difference information of the symmetric regions on the two sides of the straight line, expressed for line_k ∈ merge_lines as:

P(line_k) = (1/N) · Σ 1(|region1(line_k) − region2(line_k)| ≥ valueThresh)

where region1(line_k) and region2(line_k) denote the fixed-width symmetric regions on the two sides of the straight line line_k, the sum runs over the N pixel points of the regions, and valueThresh is a set threshold. The greater the difference between the symmetric regions on the two sides of a straight line, the higher the probability that the straight line is a boundary line of the plane mask.
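One plausible reading of the symmetric-region difference is sketched below: sample points along the line, compare mirrored pixels in fixed-width bands on the two sides, and score the fraction of pairs whose values differ by at least value_thresh; the band width, sample count, and thresholds are assumptions.

```python
import numpy as np

def boundary_probability(mask: np.ndarray, seg, width: int = 5,
                         value_thresh: int = 1) -> float:
    """seg: (x1, y1, x2, y2). Compares mirrored pixels in bands of the given
    width on both sides of the segment; larger difference -> higher probability."""
    x1, y1, x2, y2 = map(float, seg)
    d = np.array([x2 - x1, y2 - y1])
    d /= np.linalg.norm(d) + 1e-8
    n = np.array([-d[1], d[0]])                     # unit normal to the line
    h, w = mask.shape
    diffs, total = 0, 0
    for t in np.linspace(0.0, 1.0, 50):             # sample points along the segment
        p = np.array([x1, y1]) + t * np.array([x2 - x1, y2 - y1])
        for off in range(1, width + 1):
            a = np.round(p + off * n).astype(int)   # symmetric pixels on
            b = np.round(p - off * n).astype(int)   # the two sides
            if 0 <= a[1] < h and 0 <= a[0] < w and 0 <= b[1] < h and 0 <= b[0] < w:
                total += 1
                if abs(int(mask[a[1], a[0]]) - int(mask[b[1], b[0]])) >= value_thresh:
                    diffs += 1
    return diffs / total if total else 0.0
```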

In step 490, where such a line pair is found, the straight line with the higher probability in the line pair with the highest probability is determined as a boundary line of the plane mask in the plane coordinate system; where no line pair is found, the straight line with the highest probability is determined as a boundary line of the plane mask in the plane coordinate system. This yields the boundary line of the plane mask in the plane coordinate system, denoted best_line = BestLine(choose_lines). An inscribed rectangle of the plane mask in the plane coordinate system is then determined based on the boundary line.

An inscribed rectangle of the plane mask parallel to the boundary line is determined in the plane coordinate system; the inscribed rectangle is, for example, the maximum inscribed square, expressed as: square = MaxInscribedSquare(mask), where each edge square_edge_i ∥ best_line, and the four vertices of the maximum inscribed square serve as the 4 key points in the plane coordinate system.
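One hedged way to realize the boundary-parallel maximum inscribed square: rotate the mask beforehand so that best_line is axis-aligned, run the classic dynamic-programming maximal-square scan below, and rotate the resulting vertices back (the rotation bookkeeping is omitted here).

```python
import numpy as np

def max_inscribed_square(mask: np.ndarray):
    """mask: binary array already rotated so that best_line is axis-aligned.
    Classic DP: dp[i, j] = side length of the largest filled square whose
    bottom-right corner is (i, j). Returns the 4 vertices as (x, y) pairs."""
    h, w = mask.shape
    dp = np.zeros((h, w), dtype=int)
    best, bi, bj = 0, 0, 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                dp[i, j] = 1 if i == 0 or j == 0 else \
                    min(dp[i - 1, j], dp[i, j - 1], dp[i - 1, j - 1]) + 1
                if dp[i, j] > best:
                    best, bi, bj = dp[i, j], i, j
    if best == 0:
        return None
    s = best - 1
    return [(bj - s, bi - s), (bj, bi - s), (bj - s, bi), (bj, bi)]
```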

At step 4100, 4 vertices of the inscribed rectangle of the plane mask are converted from the plane coordinate system to the pixel coordinate system.

As previously described, the coordinates S_pixel of the foreground points of the plane mask of the training image in the pixel coordinate system are known, as are the coordinates S_plane of the foreground points in the plane coordinate system, so a transformation matrix T between the pixel coordinate system and the plane coordinate system is obtained, i.e., S_pixel = T · S_plane. Applying T to the 4 vertices of the inscribed rectangle of the plane mask determined above in the plane coordinate system gives the coordinates of the 4 key points in the plane mask of the training image in the pixel coordinate system.
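Since corresponding coordinates of the same foreground points are known in both systems, T can be estimated as a homography; the sketch below uses cv2.findHomography on the correspondences and cv2.perspectiveTransform to map the 4 inscribed-rectangle vertices to pixel coordinates. Function and variable names are illustrative.

```python
import cv2
import numpy as np

def keypoints_to_pixels(plane_pts: np.ndarray, pixel_pts: np.ndarray,
                        vertices: np.ndarray) -> np.ndarray:
    """plane_pts, pixel_pts: (N, 2) corresponding foreground-point coordinates
    in the plane and pixel coordinate systems; vertices: (4, 2) inscribed-
    rectangle vertices in plane coordinates. Returns (4, 2) pixel coordinates."""
    T, _ = cv2.findHomography(plane_pts.astype(np.float32),
                              pixel_pts.astype(np.float32), cv2.RANSAC)
    v = vertices.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(v, T).reshape(4, 2)
```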

the method comprises the steps of automatically searching 4 key points inscribed in a plane mask in a training image, and training a model as training data, so that the model can predict the 4 key points inscribed in the plane mask in a video frame image, the image can be embedded into a proper position in a video frame image, and the fusion effect of the image and the video is further improved.

Fig. 6 illustrates a flow diagram of a method of embedding an image in a video according to some embodiments of the present disclosure.

As shown in fig. 6, the method of this embodiment includes:

in step 610, a video frame image of a segment of video is input into a plane prediction model, and a plane mask of the predicted video frame image is obtained.

The plane prediction model is obtained by training a deep learning model with training images carrying labels of a plane detection box and a plane mask; see the foregoing embodiments for details.

In step 620, the image to be embedded is embedded into a plane mask of the predicted video frame image.

For example, the image to be embedded is embedded in a position region of the plane mask whose edges are parallel to the boundary lines of the plane mask.

Examples of images to be embedded include, but are not limited to, business identification images, product images, character images, and advertisement images.

The method automatically finds the plane masks that exist widely in video frame images and embeds the image to be embedded into a plane mask, so that the image is fused into the video automatically and naturally, and more widely.

Fig. 7 is a flow diagram illustrating a method for embedding an image in a video according to further embodiments of the disclosure.

As shown in fig. 7, the method of this embodiment includes:

in step 710, a video frame image of a segment of video is input into a plane prediction model, and a plane mask of the predicted video frame image and 4 key points therein are obtained.

The plane prediction model is obtained by training a deep learning model with training images carrying labels of a plane detection box and a plane mask as well as labeling information of 4 key points in the plane mask; see the foregoing embodiments for details.

In step 720, the 4 vertices of the image to be embedded are mapped to the 4 key points in the predicted plane mask of the video frame image, and the image to be embedded is embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image.

Specifically, from the mapping relation between the 4 vertices of the image to be embedded I_ad (of resolution w × h, with vertex coordinates (0, 0), (w, 0), (0, h), (w, h)) and the 4 key points pre″ in the plane mask of the predicted video frame image I_rgb, a transformation matrix M ∈ R^{3×3} from I_ad to the plane mask of I_rgb is determined. Based on the transformation matrix, each foreground point of the image to be embedded is transformed into the position region of the 4 key points in the plane mask of the predicted video frame image: for each pixel point p_rgb ∈ R^{1×2} of the position region formed by the 4 key points on I_rgb, the corresponding pixel point p_ad ∈ R^{1×2} on I_ad is found through [p_ad, 1]^T = M [p_rgb, 1]^T, and finally the value of p_ad is assigned to p_rgb.
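A sketch of this embedding step with OpenCV: instead of iterating pixels with the inverse mapping [p_ad, 1]^T = M [p_rgb, 1]^T, it computes the equivalent forward homography from the 4 vertices of I_ad to the 4 predicted key points and lets warpPerspective do the resampling; the whole ad image is treated as foreground (replace the warped region mask if I_ad has transparency).

```python
import cv2
import numpy as np

def embed_image(frame: np.ndarray, ad: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """frame: video frame I_rgb (BGR); ad: image to embed I_ad (BGR, h x w);
    keypoints: (4, 2) predicted keypoints pre'' ordered to match the ad
    vertices (0,0), (w,0), (0,h), (w,h). Returns the frame with the ad embedded."""
    h, w = ad.shape[:2]
    src = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
    dst = keypoints.astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)          # forward matrix I_ad -> I_rgb
    fh, fw = frame.shape[:2]
    warped = cv2.warpPerspective(ad, H, (fw, fh))
    # warp an all-ones mask to find which frame pixels fall inside the quad
    region = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, (fw, fh))
    out = frame.copy()
    out[region > 0] = warped[region > 0]
    return out
```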

The method automatically finds the plane mask that exists widely in each video frame image and the 4 key points in the plane mask, and embeds the image to be embedded into the position region corresponding to the 4 key points in the plane mask, so that the image is fused into the video automatically and naturally and the fusion effect of the image and the video is improved.

Fig. 8 shows a schematic diagram of an apparatus for embedding an image in a video according to some embodiments of the present disclosure.

As shown in fig. 8, the apparatus 800 for embedding an image in a video according to this embodiment includes: a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to perform a method of embedding an image in a video in any of the embodiments described above based on instructions stored in the memory 810.

Memory 810 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.

The apparatus 800 may also include an input-output interface 830, a network interface 840, a storage interface 850, and so on. These interfaces 830, 840, 850, the memory 810, and the processor 820 may be connected, for example, by a bus 860. The input-output interface 830 provides a connection interface for input-output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as an SD card or a USB flash drive.

Fig. 9 shows a schematic diagram of a planar prediction model acquisition apparatus according to some embodiments of the present disclosure.

As shown in fig. 9, the plane prediction model acquisition apparatus 900 of this embodiment includes: a memory 910 and a processor 920 coupled to the memory 910, wherein the processor 920 is configured to execute the plane prediction model obtaining method in any of the embodiments based on instructions stored in the memory 910.

Memory 910 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.

The apparatus 900 may also include an input-output interface 930, a network interface 940, a storage interface 950, and so on. These interfaces 930, 940, 950, the memory 910, and the processor 920 may be connected, for example, by a bus 960. The input-output interface 930 provides a connection interface for input-output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 940 provides a connection interface for various networking devices. The storage interface 950 provides a connection interface for external storage devices such as an SD card or a USB flash drive.

The apparatus 800 for embedding an image in a video may be different from or the same as the plane prediction model acquisition apparatus 900. For example, the apparatus 800 for embedding an image in a video and the plane prediction model acquisition apparatus 900 may be deployed on one computer or on two computers.

Some embodiments of the present disclosure provide a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements a method of embedding an image in a video or a planar prediction model acquisition method.

As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer program code embodied therein.

The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
