Method and system for simulating three-dimensional image sequence

Document No. 292517 · Published: 2021-11-23

Reading note: This technology, "Method and system for simulating a three-dimensional image sequence," was created by Jerry Nims, William M. Karszes, and Samuel Pol on 2020-01-27. Its main content comprises: A method for simulating a 3-D image sequence from a sequence of 2-D image frames (110), the method comprising: capturing a plurality of 2-D image frames (110) of a scene from a plurality of different points of view, wherein a first near plane and a second far plane are identified within each image frame (110) in the sequence, and wherein each point of view maintains a substantially identical first near image plane for each image frame; determining a depth estimate for the first near plane and the second far plane within each image frame in the sequence; aligning the first near plane of each image frame (110) in the sequence and moving the second far plane of each subsequent image frame (110) in the sequence based on the depth estimate of the second far plane of each image frame (110), thereby generating a modified image frame corresponding to each 2-D image frame; and displaying the modified image frames in sequence. A system comprising means for performing the above method is also disclosed.

1. A method of simulating a 3-D image sequence from a sequence of 2-D image frames, the method comprising:

capturing a plurality of 2-D image frames of a scene from a plurality of different observation points, wherein a first near plane and a second far plane are identified within each image frame in the sequence, and wherein each observation point maintains a substantially identical first near image plane for each image frame;

determining a depth estimate for the first near plane and the second far plane within each image frame in the sequence;

aligning the first near plane of each image frame in the sequence and moving the second far plane of each subsequent image frame in the sequence based on the depth estimate of the second far plane of each image frame, thereby generating a modified image frame corresponding to each 2-D image frame; and

sequentially displaying the modified image frames.

2. The method of claim 1, comprising determining three or more planes for each image frame in the sequence, and optionally wherein the planes have different depth estimates.

3. The method of claim 2, wherein each respective plane moves based on a difference between a depth estimate of the respective plane and the first proximal plane.

4. The method of any preceding claim, wherein the first near planes of each modified image frame are aligned such that they lie in the same pixel space, and/or optionally wherein the first near planes comprise key object points.

5. The method of any of claims 2 to 4, wherein the planes comprise at least one foreground plane, and/or optionally wherein the planes comprise at least one background plane.

6. The method of any one of the preceding claims, wherein the sequential observation points lie on a straight line, or wherein the sequential observation points lie on a curved line.

7. The method of any one of claims 1 to 6, wherein the sequential observation points are separated by a distance of between 50 mm and 80 mm, and optionally wherein the sequential observation points are separated by a distance of 64 mm.

8. A system for simulating a 3-D image sequence from a sequence of 2-D image frames, comprising:

image capture means for capturing a plurality of 2-D image frames of a scene from a plurality of different sequential observation points, wherein a first proximal plane and a second distal plane are identified within each image frame in the sequence;

displacement means for displacing the image capture means to the sequential observation points to maintain a substantially identical first proximal image plane for each image frame;

means for determining a depth estimate for the first proximal plane and the second distal plane within each image frame in the sequence;

means for aligning the first proximal plane of each image frame in the sequence, and means for moving the second distal plane of each subsequent image frame in the sequence based on the depth estimate of the second distal plane of each image frame, thereby generating a modified image frame corresponding to each 2-D image frame; and

a display device for sequentially displaying the modified image frames.

9. The system of claim 8, comprising means for determining three or more planes for each image frame in the sequence, and optionally wherein the planes have different depth estimates.

10. The system of claim 9, wherein each respective plane moves based on a difference between the depth estimate of the respective plane and the first proximal plane.

11. The system of any of claims 8 to 10, comprising means for aligning the first proximal plane of each modified image frame such that the first proximal planes are located in the same pixel space of the display device, and/or optionally wherein the first proximal planes comprise key object points.

12. The system of any of claims 9 to 11, wherein the plurality of planes comprises at least one foreground plane, and/or optionally wherein the plurality of planes comprises at least one background plane.

13. The system of any of claims 8 to 12, wherein the displacement means displaces the image capture means to the sequential observation points along a linear path, or wherein the displacement means displaces the image capture means to the sequential observation points along a curved path.

14. The system of any one of claims 8 to 13, wherein the sequential observation points are separated by a distance of between 50 mm and 80 mm, and optionally wherein the sequential observation points are separated by a distance of 64 mm.

15. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method of any of claims 1-7.

Background

Depth perception is based on a number of cues, with binocular disparity and motion parallax generally providing more accurate depth information than pictorial cues. Binocular disparity and motion parallax provide two independent quantitative cues for depth perception. Binocular disparity refers to the difference in position between the two retinal image projections of a point in 3-D space. As shown in figs. 1A and 1B, the robust perception of depth obtained when viewing the object 102 in the image scene 110 indicates that the brain can compute depth from binocular disparity cues alone. In binocular vision, the horopter 112 is the locus of points in space having the same disparity as the fixation point 114. Objects located on the horopter produce a single fused image, while objects at an appreciable distance from it produce two images 116, 118.

By using the two images 116, 118 of the same object 102 obtained from slightly different angles, it is possible to triangulate the distance to the object 102 with high accuracy. Each eye 104, 106 views the object 102 from a slightly different angle as a result of the horizontal separation of the eyes. If the object is far away, the disparity 108 between the images falling on the two retinas will be small; if the object is near, the disparity will be large.
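This triangulation can be made concrete with the standard pinhole relation Z = f·B/d, where Z is depth, f is the focal length in pixels, B is the baseline between the two viewpoints, and d is the disparity in pixels. The following is a minimal Python sketch of that relation; the function name and the numbers are illustrative, not taken from the patent.

    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Estimate distance to a point from its image disparity (pinhole model)."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # A distant object produces a small disparity; a near one, a large disparity.
    print(depth_from_disparity(focal_px=1200.0, baseline_m=0.064, disparity_px=4.0))   # ~19.2 m
    print(depth_from_disparity(focal_px=1200.0, baseline_m=0.064, disparity_px=64.0))  # ~1.2 m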

Motion parallax 120 refers to the relative image motion, between objects at different depths, caused by translation of the viewer 104. In addition to binocular and pictorial depth cues, motion parallax 120 can also provide accurate depth perception, provided it is accompanied by an auxiliary signal specifying the change in eye direction relative to the visual scene 110. As shown, as the viewer 104 moves, the relative motion of the object 102 against the background gives an indication of its relative distance: if the object 102 is far away, it appears nearly stationary; if the object 102 is near, it appears to move faster.

In order to view the object 102 at a close distance and fuse the images on the two retinas into a single object, the visual axes of the two eyes 104, 106 converge on the object 102. The muscular action of changing the focal length of the lens of the eye so as to place a focused image on the fovea of the retina is called accommodation. Both this muscular action and the lack of focus at adjacent depths provide additional information that the brain can use to perceive depth. On its own, image blur is an ambiguous depth cue; however, by shifting the focal plane (accommodating nearer and/or farther than the object 102), the ambiguity is resolved.

Figs. 2A and 2B show a graphical representation of the anatomy of the eye 200 and the distribution of rods and cones, respectively. The fovea 202 is responsible for sharp central vision (also known as foveal vision), which is necessary wherever visual detail is critical. The fovea 202 is a depression in the inner surface of the retina, approximately 1.5 mm wide, composed entirely of cone cells 204 dedicated to maximum visual acuity. Rod cells 206 are receptors for low-light, grayscale vision and are important for peripheral vision, while cone cells 204 operate at higher light levels and receive color information. The importance of the fovea 202 will be more clearly understood with reference to fig. 2B, which illustrates the distribution of cones 204 and rods 206 in the eye 200. As shown, the majority of the cone cells 204, which provide the highest visual acuity, lie within about 1.5° of the center of the fovea 202.

Fig. 3 shows a typical field of view 300. As shown, the fovea 202 sees only the central 1.5° of the field of view 302, and the preferred field of view 304 lies within ±15° of the center of the fovea 202. Whether an object falls on the fovea therefore depends on the linear size, viewing angle, and viewing distance of the object 102. A large object 102 viewed at a close distance subtends a large viewing angle extending outside foveal vision, while a small object 102 viewed at a far distance subtends a small viewing angle falling inside foveal vision. An object 102 falling within foveal vision is rendered with high visual acuity in the brain. Under natural viewing conditions, however, viewers do not perceive a scene purely passively; instead, they dynamically scan the visual scene 110 by moving their gaze and focus between objects at different viewing distances. In doing so, the oculomotor processes of accommodation and vergence (the angle between the lines of sight of the left and right eyes 104, 106) must move in synchrony to bring each new object into sharp focus at the center of each retina. Natural viewing conditions thus reflexively link accommodation and vergence, and a change in one process naturally drives a corresponding change in the other.

Conventional stereoscopic displays force viewers to decouple these processes: while they must dynamically change the vergence angle to view objects at different stereoscopic distances, their accommodation must remain fixed at the display distance or the entire display falls out of focus. When viewing such displays, this decoupling produces eye strain and degrades perceived image quality.

It is an object of the present invention to overcome or alleviate these known problems.

Disclosure of Invention

According to a first aspect of the invention, there is provided a method of simulating a 3-D image sequence from a sequence of 2-D image frames, the method comprising: capturing a plurality of 2-D image frames of a scene from a plurality of different observation points, wherein a first near plane and a second far plane are identified within each image frame in the sequence, and wherein each observation point maintains a substantially identical first near image plane for each image frame; determining depth estimates for the first near plane and the second far plane within each image frame in the sequence; aligning the first near plane of each image frame in the sequence and moving the second far plane of each subsequent image frame in the sequence based on the depth estimate of the second far plane of each image frame to generate a modified image frame corresponding to each 2-D image frame; and displaying the modified image frames in sequence.
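As a rough outline of how these steps could fit together, the following Python sketch strings them together under explicit assumptions: frames are numpy arrays, and the plane estimation, alignment, and displacement operations are supplied as callables. Every helper name here (estimate_planes, align_to_reference, shift_far) is hypothetical; the patent does not prescribe an implementation.

    from typing import Callable, List
    import numpy as np

    # Assumed helper signatures (hypothetical, supplied elsewhere):
    #   estimate_planes(frame) -> (near_mask, far_mask, far_depth)
    #   align_to_reference(frame, near_mask, ref_mask) -> frame registered so the
    #       near plane occupies the reference pixel space
    #   shift_far(frame, far_mask, far_depth) -> frame with the far plane displaced

    def simulate_3d_sequence(frames: List[np.ndarray],
                             estimate_planes: Callable,
                             align_to_reference: Callable,
                             shift_far: Callable) -> List[np.ndarray]:
        ref_mask = None
        modified = []
        for frame in frames:
            near_mask, far_mask, far_depth = estimate_planes(frame)
            if ref_mask is None:
                ref_mask = near_mask              # the first frame defines the pixel space
            aligned = align_to_reference(frame, near_mask, ref_mask)
            modified.append(shift_far(aligned, far_mask, far_depth))
        return modified                           # displayed in sequence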

The invention changes the focus of objects at different planes in the displayed scene to match the vergence and stereoscopic retinal disparity requirements, thereby better simulating natural viewing conditions. By adjusting the focus of key objects in the scene to match their stereoscopic retinal disparity, the cues for visual accommodation and vergence are kept consistent. As in natural vision, the viewer brings different objects into focus by varying accommodation. As the mismatch between accommodation and vergence decreases, natural viewing conditions are better simulated and eye strain is reduced.

Preferably, the method further comprises determining three or more planes for each image frame in the sequence.

Furthermore, it is preferred that the planes have different depth estimates.

Furthermore, it is preferred that each respective plane is moved based on a difference between the depth estimate of the respective plane and the first proximal plane.

Preferably, the first near-end planes of each modified image frame are aligned such that the first near-end planes are located in the same pixel space.

It is also preferred that the first plane comprises key object points.

Preferably, the planes include at least one foreground plane.

Furthermore, it is preferred that the planes comprise at least one background plane.

Preferably, the sequential observation points are located on a straight line.

Preferably, the sequential observation points lie on a curve.

It is also preferred that the sequential observation points are separated by a distance of between 50 mm and 80 mm.

Further, it is preferable that the sequential observation points are separated by a distance of 64 mm.

According to a second aspect of the invention, there is provided a system for simulating a 3-D image sequence from a sequence of 2-D image frames, comprising: image capture means for capturing a plurality of 2-D image frames of a scene from a plurality of different sequential observation points, wherein a first near-end plane and a second far-end plane are identified within each image frame in the sequence; displacement means for displacing the image capture means to the sequential observation points to maintain a substantially identical first near-end image plane for each image frame; means for determining a depth estimate for the first near-end plane and the second far-end plane within each image frame in the sequence; means for aligning the first near-end plane of each image frame in the sequence, and means for moving the second far-end plane of each subsequent image frame in the sequence based on the depth estimate of the second far-end plane of each image frame, thereby generating a modified image frame corresponding to each 2-D image frame; and a display device for sequentially displaying the modified image frames.

Preferably, the system comprises means for determining three or more planes for each image frame in the sequence.

Furthermore, it is preferred that the planes have different depth estimates, and wherein each respective plane is moved based on a difference between the depth estimate of the respective plane and the first proximal plane.

It is also preferred that the system comprises means for aligning the first near-end plane of each modified image frame such that the first near-end planes are located at the same pixel space of the display device.

Preferably, the first proximal plane comprises a key object point.

It is also preferred that the plurality of planes comprises at least one foreground plane and at least one background plane.

Preferably, the displacement means displaces the image capture means to the sequential observation points along a linear path.

It is also preferred that the displacement means displaces the image capture means to the sequential observation points along a curved path.

It is also preferred that the sequential observation points are separated by a distance of between 50 mm and 80 mm.

Further, it is preferable that the sequential observation points are separated by a distance of 64 mm.

According to a third aspect of the invention, there is provided a non-transitory computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform a method according to the first aspect of the invention.

Drawings

Specific embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

FIG. 1A illustrates a 2D rendering of an image based on a change in orientation of a viewer relative to a display;

FIG. 1B illustrates a 2D rendering of an image with binocular disparity due to horizontal separation disparity of the left and right eyes;

FIG. 2A is a cross-sectional view of an eyeball structure;

FIG. 2B is a graph of the density of rods and cones versus foveal position;

FIG. 3 is a top view of the field of view of an observer;

fig. 4A is a view showing the relationship between the interocular distance and the image capture distance of a rectilinear camera apparatus according to the present invention;

fig. 4B is a view showing the relationship between the interocular distance and the image capture distance of a curvilinear camera apparatus according to the present invention;

FIG. 5A is a perspective view of the camera device of FIG. 4B;

FIG. 5B is a side view of the camera device of FIG. 4B;

FIG. 6 is a flow chart of the process steps for converting a series of acquired stereoscopic images into a sequence of 3-D DIF images in accordance with the present invention; and

fig. 7 is a diagram illustrating the geometrical displacement of points between two successive image frames according to the invention.

Detailed Description

As described above, the sense of depth in a stereoscopic image varies with the distance between the camera and the key object, referred to as the image capture distance. Depth perception is also controlled by the vergence angle and by the interocular distance at which the camera captures each successive image, which affects binocular parallax.

Binocular parallax is a stereoscopic perception factor arising from the average separation between the left and right eyes (which ranges from about 50 mm to 80 mm, with an average of about 64 mm). When binocular parallax is relatively large, the observer perceives the key object as relatively close; when it is relatively small, the observer perceives the key object as relatively far away. The vergence angle is the angle between the lines of sight of the left and right eyes, with the key object at its vertex, when the eyes are fixated on the key object. As the vergence angle increases (the eyes rotate inward), the perceived distance of the key object decreases. As the vergence angle decreases (the eyes rotate outward), the perceived distance of the key object increases.
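The interplay of interocular distance, object distance, and vergence angle reduces to simple trigonometry. Below is a sketch assuming symmetric fixation (the key object on the midline between the eyes); the function name and sample values are illustrative.

    import math

    def vergence_angle_deg(interocular_m: float, distance_m: float) -> float:
        """Vergence angle for two eyes converging on a point straight ahead."""
        return math.degrees(2.0 * math.atan((interocular_m / 2.0) / distance_m))

    print(vergence_angle_deg(0.064, 0.5))  # ~7.3 degrees: near object, eyes rotated inward
    print(vergence_angle_deg(0.064, 5.0))  # ~0.7 degrees: far object, nearly parallel gaze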

For best effect, the interocular distance between captures of successive images is fixed to match the average separation of the left and right eyes, thereby maintaining constant binocular parallax. Further, the distance to the key object is selected such that the captured image of the key object falls within the viewer's foveal range, producing high visual acuity for the key object, while maintaining a vergence angle within the preferred viewing angle of 15° or less.

The type of image capture system is selected based on the size and distance of the key object. For image capture distances of less than 1.5 m, a curvilinear stereoscopic image capture system is used; for image capture distances greater than 1.5 m, a rectilinear stereoscopic image capture system is used.
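Stated as code, this selection rule is a one-line threshold test; a sketch, with the 1.5 m boundary taken from the text:

    def select_capture_system(capture_distance_m: float) -> str:
        """Choose the capture rig for a given camera-to-key-object distance."""
        return "curvilinear" if capture_distance_m < 1.5 else "rectilinear"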

Fig. 4A illustrates a rectilinear stereoscopic image capture system 400 for capturing stereoscopic images (e.g., the 2-D frames of a 3-D sequence). As shown, the rectilinear stereoscopic image capture system 400 has a camera device 402 movably coupled to a linear track 404. The camera device 402 includes a stepper motor (not shown) for moving the camera device 402 along the linear track 404 in precisely defined incremental steps. The camera device 402 also has a camera 406 oriented to capture stereoscopic images of the key object 408, and a control module (not shown) for controlling the direction of the camera 406 and the stepper motor.

In use, the key object 408 is placed at an image capture distance of 1.5 m or greater from the camera 406. The control module controls the direction of the camera 406 and the movement of the camera device 402 along the linear track 404 while the camera 406 captures stereoscopic images at predetermined intervals determined by the interocular distance 410. Ideally, the image capture distance remains constant.

If the image capture distance changes, due either to a change in the distance between the camera 406 and the key object 408 or to a change in the focal length of the camera 406 (i.e., zooming in or out), and the interocular distance 410 between captures of successive stereoscopic images is held constant, the vergence angle will change accordingly. The change in vergence angle drives a corresponding change in accommodation. However, the accommodation distance remains fixed at the display distance, so the natural correlation between vergence and accommodation is broken, producing the so-called vergence-accommodation conflict, which leads to eye fatigue and poor image quality. To avoid this, the interocular distance 410 between successive images may be varied to compensate for changes in image capture distance that would otherwise push the vergence angle beyond the preferred viewing angle of 15°. For example, as the image capture distance decreases, the vergence angle increases, and the interocular distance 410 between successive images is decreased accordingly. Similarly, as the image capture distance increases, the vergence angle decreases, and the interocular distance 410 between successive images is increased accordingly.
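One way to realize this compensation, sketched below using the symmetric-fixation geometry from earlier, is to invert the vergence relation (IOD = 2·Z·tan(θ/2)) to find the largest interocular distance that keeps the vergence angle at or below the 15° preferred viewing angle, and to cap the nominal 64 mm spacing at that value. The inversion is an assumption consistent with the geometry described above, not a formula given in the text.

    import math

    def max_interocular_m(capture_distance_m: float, max_vergence_deg: float = 15.0) -> float:
        """Largest capture spacing whose vergence angle stays within the limit."""
        return 2.0 * capture_distance_m * math.tan(math.radians(max_vergence_deg) / 2.0)

    def adjusted_interocular_m(capture_distance_m: float, nominal_iod_m: float = 0.064) -> float:
        # Shrink the step between successive observation points as the capture distance shrinks.
        return min(nominal_iod_m, max_interocular_m(capture_distance_m))

    print(adjusted_interocular_m(0.20))  # close key object: spacing reduced to ~0.053 m
    print(adjusted_interocular_m(2.00))  # distant key object: nominal 0.064 m retained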

Fig. 4B illustrates a curvilinear image capture system 420 for capturing stereoscopic images. As shown, the curvilinear image capture system 420 has a camera device 402 (described in more detail below with reference to figs. 5A and 5B) that is movable along a circular path 412 around a fixed point. In the curvilinear image capture system 420, the key object 408 is positioned at or near the fixed point. The camera device 402 includes a stepper motor (not shown) for moving the camera device 402 along the circular path 412 in precisely defined incremental steps corresponding to the interocular distance. The camera device 402 also has a camera 406 oriented to capture stereoscopic images of the key object 408, and a control module (not shown) for controlling the direction of the camera 406 and the stepper motor.

In use, the key object 408 is positioned near the fixed point, at an image capture distance of 1.5 m or less from the camera 406. The control module controls the direction of the camera 406 and the movement of the camera device 402 along the circular path 412 while the camera 406 captures stereoscopic images at predetermined intervals determined by the interocular distance. Ideally, the focal length of the camera 406 remains fixed on the key object 408. However, a change in the focal length of the camera 406 (i.e., zooming in or out) results in a change in the vergence angle. To avoid this, the interocular distance between successive images may be varied to compensate for the change in focal length, keeping the vergence angle within the preferred viewing angle of 15°. For example, as the focal length decreases, the vergence angle increases, and the interocular distance between successive images is decreased accordingly. Similarly, as the focal length increases, the vergence angle decreases, and the interocular distance between successive images is increased accordingly.

Referring now to fig. 5A and 5B, a camera arrangement 500 for a curvilinear image capture system 420 is shown. The camera device 500 has a rectangular camera stage 502 for mounting a camera (not shown), a pivot base 504 about which the camera stage 502 moves, and a pair of radius rods 506 extending radially inward from beneath the camera stage 502 to the pivot base 504. A radius adjustment block 508 is mounted below the camera stage 502 and couples a first end 510 of each radius rod 506 to the camera stage 502. Each radius rod 506 is mounted on the pivot base 504 at a second end 512 opposite the first end 510 and is rotatable about the pivot base 504.

The stepper motor 514 is mounted below the radius adjustment block 508 at a first end 522. A drive shaft 516 extends radially outward from the stepper motor 514 and is coupled to a drive wheel 518. The stepper motor 514 controls the rotation of the drive wheel 518 in precisely defined incremental steps. The camera stage 502 is supported by a second wheel 520, the second wheel 520 being mounted below the radius adjustment block 508 at a second end 524 opposite the first end.

In use, the radius adjustment block 508 adjusts the length of the radius rods 506 extending radially inward from the camera stage 502, thereby adjusting the distance between the camera stage 502 and the pivot base 504, and thus the image capture distance. The radius adjustment block 508 adjusts the length of the radius rods 506 so as to place the key object 408 at the focal distance of the camera.

The control module causes the stepper motor 514 to rotate the drive wheel 518 of the camera device 500 in precisely defined incremental steps corresponding to the interocular distance. At each step, the camera device 500 stops and the camera captures a stereoscopic image. This process is repeated until the desired number of stereoscopic images has been captured. The stereoscopic images are then processed according to the block diagram of fig. 6.

Fig. 6 shows the process steps performed by a computer system (not shown) to convert the acquired stereoscopic images into a 3-D image sequence. In a first step 602, the computer system receives, via an image acquisition application, the plurality of stereoscopic images captured by the camera. The image acquisition application converts each stereoscopic image into a digital source image in, for example, JPEG, GIF, or TIFF format. Ideally, each digital source image includes a number of visible objects, or points therein, such as a foreground (closest) point, a background (farthest) point, and the key object 408. The foreground and background points are, respectively, the closest and farthest points from the viewer. Depth of field is the depth or distance created in the object field (the distance from the foreground to the background). The principal axis is the line perpendicular to the scene passing through the key object 408, and parallax is the displacement of a key object 408 point relative to the principal axis. In digital composition, the displacement is always kept at an integer number of pixels from the principal axis.
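As a sketch of how such points might be pulled out of a per-pixel depth map (step 604, below, describes the computer system identifying them this way), the tolerance-based masks here are an assumed heuristic; the text specifies only that the depth map identifies the closest point, the farthest point, and the key object.

    import numpy as np

    def identify_planes(depth: np.ndarray, key_depth: float, tol: float = 0.05):
        """Return boolean masks for the foreground, key-object, and background planes."""
        foreground = depth <= depth.min() * (1.0 + tol)          # closest points
        background = depth >= depth.max() * (1.0 - tol)          # farthest points
        key_object = np.abs(depth - key_depth) <= key_depth * tol
        return foreground, key_object, background

    depth_map = np.array([[0.8, 0.9, 3.0],
                          [1.5, 1.5, 3.1],
                          [0.8, 1.5, 3.0]])                      # depths in metres
    fg, key, bg = identify_planes(depth_map, key_depth=1.5)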

In a second step 604, the computer system identifies the key object 408 in each source image; the key object 408 identified in each source image corresponds to the same key object 408. The computer system may identify the key object 408 from a depth map of the source image, and may likewise use the depth map to identify the foreground (closest) point and the background (farthest) point. In a third step 606, the computer system transforms each source image to align the identified key object 408 in the same pixel space as in the previous source image. Horizontal and vertical alignment of each source image requires a Dimensional Image Format (DIF) transform. The DIF transform is a geometric displacement that does not change the information acquired at each point in the source image, but can be seen as a displacement of each point of the source image in Cartesian space (as shown in fig. 7). Expressed as a plenoptic function, the DIF transform is represented by the following equation:

Px,y = Px,y + Δx,y

wherein Px,y denotes the plenoptic information at point (x, y) of the source image and Δx,y denotes the geometric displacement applied to that point.

In the case of a digital image source, the geometric displacement corresponds to a displacement of the pixels containing the plenoptic information, and the DIF transform becomes:

(Pixel)x,y = (Pixel)x,y + Δx,y

The computer system may also apply a geometric displacement to the background and/or foreground using the DIF transform. The background and foreground may be geometrically displaced according to their respective depths relative to the depth of the key object 408, as identified by the depth map of the source image. Controlling the geometric displacement of the background and foreground relative to the key object 408 controls the motion parallax of the key object 408. As described above, the relative motion of the key object 408 with respect to the background or foreground surfaces gives the viewer an indication of its relative distance. In this way, motion parallax is controlled so that objects at different depths in the displayed scene are focused to match the vergence and stereoscopic retinal disparity requirements, thereby better simulating natural viewing conditions. By adjusting the focus of the key object 408 in the scene to match its stereoscopic retinal disparity, the cues for visual accommodation and vergence are kept consistent.
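In pixel terms, the per-plane displacement of the DIF transform can be sketched as an integer shift of a masked region; the linear scaling of the shift with the depth difference from the key object is an assumption, since the text requires only that displacements be integer numbers of pixels.

    import numpy as np

    def plane_shift_px(plane_depth: float, key_depth: float, gain: float = 8.0) -> int:
        """Integer pixel displacement, growing with distance from the key-object plane."""
        return int(round(gain * (plane_depth - key_depth)))

    def shift_plane(frame: np.ndarray, mask: np.ndarray, dx: int) -> np.ndarray:
        """Displace the pixels of one plane horizontally by dx pixels."""
        out = frame.copy()
        shifted = np.roll(frame, dx, axis=1)       # (Pixel)x,y plus the displacement Δx,y
        shifted_mask = np.roll(mask, dx, axis=1)
        out[shifted_mask] = shifted[shifted_mask]
        return out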

After the DIF transform has been applied, the source images are compiled into a sequence at step 608. The sequence follows the same order in which the source images were acquired, and the reverse sequence is appended at step 610 to create a seamless palindromic loop. At step 612, a time interval is assigned to each frame in the sequence; the time interval between frames may be adjusted at step 614 to provide smooth motion and optimal 3-D viewing. The resulting 3-D image sequence is then output as a DIF sequence at step 616, where it can be viewed on a display device (e.g., a viewing screen capable of presenting information in pixel format, whether on a smartphone, PDA, monitor, TV, tablet, or other viewing device with stereoscopic viewing capability via, for example, a parallax barrier, barrier screen, overlay, waveguide, or other viewing technology) or sent to a printer (e.g., a consumer printer, kiosk, dedicated printer, or other hard-copy device) to print a multi-dimensional digital master image on, for example, lenticular or other physical viewing material.
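Steps 608 to 614 amount to building a forward-then-reverse frame list with a display interval per frame. The sketch below writes an animated GIF purely as one illustrative output path; imageio is an assumed dependency, and the DIF sequence itself is not tied to any particular file format.

    import imageio.v2 as imageio

    def compile_palindrome(frames, interval_s: float = 0.06, path: str = "dif_sequence.gif"):
        """Compile frames into a seamless palindromic loop and save it."""
        loop = list(frames) + list(frames)[-2:0:-1]   # forward, then reverse without doubled endpoints
        imageio.mimsave(path, loop, duration=interval_s, loop=0)  # seconds per frame; loop forever
        return loop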

This description is for illustrative purposes only and should not be construed to narrow the scope of the present disclosure in any way. Accordingly, it will be appreciated by those skilled in the art that various modifications may be made to the embodiments of the disclosure without departing from the full and fair scope of the disclosure. For example, an image acquisition application may receive a source image in another format, including the DICOM format for medical imaging.

Embodiments of the present invention also relate to an apparatus for performing the operations herein. Such an apparatus may be implemented by a computer program stored in a non-transitory computer-readable medium. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as read-only memory ("ROM"), random-access memory ("RAM"), magnetic disk storage media, optical storage media, and flash memory devices.

The processes and methods described in the figures may be performed by processing logic comprising hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer-readable medium), or a combination of both. Although the processes and methods are described above in terms of certain sequential operations, it should be understood that some of the described operations may be performed in a different order, and some operations may be performed in parallel rather than sequentially.

Embodiments of the present invention are not described with reference to any particular programming language. It should be appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the invention as described herein.
