System and method for acquiring and projecting images, and use of such a system

Document No.: 1879324. Publication date: 2021-11-23.

This technology, "System and method for acquiring and projecting images, and use of such a system", was designed and created by 亚历山大·塔德·达·科斯塔·马克斯·阿拉巴尔 and 鲁道夫·巴塞洛斯·泽维尔·菲尔霍 on 2020-02-07. Abstract: The present invention relates to a system and method for acquiring and projecting images for use in an integrated studio comprising real locations and panels that partially virtualize these sites, the images being displayed using externally generated images to create partially real and partially virtual images.

1. An image acquisition and projection system comprising an LED panel, a glass background for viewing the real environment, a camera with a motion tracking sensor, an image-capture still camera, and software running in concert.

2. The image acquisition and projection system of claim 1, wherein the LED panel is disposed laterally with respect to the glass background, perpendicular to the floor, and rotated with respect to the background window.

3. The image acquisition and projection system of claim 2, wherein the LED panels form an angle of more than 90 degrees with the background window, preferably 120 degrees.

4. The image acquisition and projection system of claim 1, wherein the LED panel is rotated about 30 degrees.

5. The image acquisition and projection system of claim 1 further comprising one or more still cameras behind the LED side panels.

6. The image acquisition and projection system of claim 5, wherein the still camera transmits the complementary image to a server, where it is distorted according to the viewing angle of the camera with motion sensor and forwarded for display on the LED side panel.

7. The image acquisition and projection system of claim 5, wherein the still camera acquires an image that is distorted and displayed on the LED panel according to the viewing angle of a camera with motion sensor located in the studio.

8. The image acquisition and projection system of claim 5, wherein the still camera preferably has a wide-angle lens and acquires images complementary to the interior scene.

9. The image acquisition and projection system of claim 5, wherein the images of the external still cameras undergo stitching, creating a single image covering the area common to the two cameras.

10. The image acquisition and projection system of claim 5, wherein the still camera creates a panoramic image that is sent to a graphics computing system, which projects the image onto a distant, scaled virtual plane for overlay and interaction with the real image.

11. The image acquisition and projection system of claim 1 further comprising two or more cameras with motion sensors located within the studio.

12. The image acquisition and projection system of claim 11, wherein the camera with the motion sensor is capable of tracking motion as well as lens changes (zoom and focus).

13. The image acquisition and projection system of claim 11, wherein the motion sensor camera acquires tracking information that is sent to a graphics server for interpretation.

14. The image acquisition and projection system of claim 11, wherein the graphics computing system receives tracking data from a camera having a motion sensor, and from its lens, through a connection.

15. The image acquisition and projection system of claim 11, wherein the motion sensor camera further comprises camera and lens tracking sensors.

16. The image acquisition and projection system of any of the preceding claims, wherein each graphics computing system calculates a projection distortion of the panoramic image.

17. The image acquisition and projection system according to any of the preceding claims, wherein the video signals are adjusted by a TV system with a calculated delay, so that the video is displayed in the correct order without visible lag.

18. Use of an image acquisition and projection system as defined in any of the preceding claims for a physical studio.

19. Use of the image acquisition and projection system of claim 18, wherein the studio comprises a front wall made entirely of glass, the other walls being opaque.

20. Use of an image acquisition and projection system according to claim 18, wherein the glass front wall comprises filters and thin films applied to balance the image generated on the LEDs with the image received through the glass front wall.

21. A method of image acquisition and projection using the system of claims 1 to 17, comprising the steps of:

acquiring images complementary to the real scene through one, two, or more static cameras with wide-angle lenses;

sending the panoramic image to a graphics computing system and projecting it onto a distant, scaled virtual plane to overlay and fit the real image;

adding camera and lens tracking sensors so that the camera used for real-time transmission provides intrinsic and extrinsic data as input to the graphics computing system;

calculating, by a graphics computing system, a projection distortion of the panoramic image;

adjusting, by a television system, the video signals with a delay calculated by the television system;

displaying the video in the correct order without visible lag.

Technical Field

The present invention relates to the field of television production, and more particularly to systems and methods for acquiring and projecting images, to their use in TV studios, and to extending scenes by virtual projection.

Background

Currently, the television and film markets use scene extension techniques in which one part of the studio is physically installed and another part is created graphically.

Currently, the two main ways to implement scene extension are chroma keying and projection screens.

Scene extension with chroma keying is one of the most common techniques at present. In this case, the scene consists of floors and walls of a homogeneous color, usually green or blue, and may or may not include additional furniture. Replacement of the image on the walls is performed by a computer program: everything the camera sees in that hue is exchanged for images, video, or virtual graphics.

US Patent No. 6,091,579 describes a position detection device for a virtual studio object, comprising a transmitting device for the object, used in conjunction with a background or standard chroma-key backdrop to provide the accurate position of a television camera, or the position of another object or person.

WO 197028654 describes a video system that uses a main camera to create a virtual shadow of a photographic subject in front of a blue screen. The virtual shadow is created using a second camera positioned as a virtual light source; the system uses chroma-key color keying.

Scene extension with projection screens is a simpler and easily applied technique, although quite limited.

In this case, panels or screens displaying images, video, or virtual graphics are placed beside the presenter in the studio. However, this technique limits camera movement: any wider move may reveal the boundaries of the panel and compromise the composition of the image. Its application is therefore limited to a fixed camera or minimal movement.

The present invention provides a system that allows the integration of virtual images into real images to be executed in real time.

Disclosure of Invention

The present invention solves the problems of studio size and camera movement described above and provides a more complete and robust method of transmitting the final image, in which the television viewer cannot distinguish real from virtual elements, with everything running in real time, i.e., usable in live programs.

The proposed solution thus enhances spatial perception in the studio through the concept of scene extension, using LED panels, a glass background for viewing the real environment, cameras with motion tracking sensors, and software responsible for controlling the image displayed on the screen.

LED panels

The LED panels are placed laterally, perpendicular to the floor, and rotated to form an angle of 120 degrees with respect to the background window. The angle may vary, but should be greater than 90 degrees.

LED technology was chosen for its high resolution relative to other available technologies: higher not only in pixel count but also in color intensity. Color intensity is of additional importance because of the impact such lighting has on the real environment; this effect contributes to the perception that the presenter is inserted into the virtual environment.

Glass background

The background is important to enhance the effect of the studio's insertion into the real environment.

Near the glass background, hidden behind the LED side panels, are two still cameras that capture images complementary to those observed through the glass. These complementary images are sent to servers, which distort them according to the view of the studio camera and forward them for display on the side panels. When viewed by the studio camera, the image on the panels extends the field of view provided by the glass background. From any other viewpoint, however, the image appears distorted and out of context.
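The view-dependent distortion applied by the server can be pictured as a planar reprojection: the still camera's image is mapped onto the panel plane through a homography chosen for the studio camera's current viewpoint. A minimal sketch in Python (the matrix values and the `apply_homography` helper are illustrative assumptions, not taken from the source):

```python
import numpy as np

def apply_homography(H, pts):
    """Map an array of N (x, y) pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = pts_h @ H.T                               # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the w term

# Hypothetical homography relating the still camera's image plane to the
# LED panel as seen from the studio camera (values are illustrative only).
H = np.array([[1.2, 0.1, 30.0],
              [0.0, 1.1, 15.0],
              [0.0, 0.0, 1.0]])

# Warp the four corners of a 1920x1080 complementary image.
corners = np.array([[0.0, 0.0], [1920.0, 0.0], [1920.0, 1080.0], [0.0, 1080.0]])
warped = apply_homography(H, corners)
```

In a real system the homography would be recomputed every frame from the studio camera's tracked pose, and the full image would be warped rather than just its corners.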

Camera with motion sensor

Two cameras with motion sensors are located in the studio. They track movement and also register lens changes (zoom and focus). These data are collected and sent to a graphics server, which interprets them so that the image captured by the static cameras near the glass background is distorted for display on the panels according to the viewing angle of the camera located in the studio.

Sensors and tracking systems are well known in the art. Software is used to generate and provide the imagery for projection on the LED panels. One embodiment of the present invention allows the construction of a complete virtual studio consisting of LED panels in the walls, ceiling, and floor.

The term "virtual insertion" is understood to mean any graphical calculation inserted on an image, such as: symbols, cards, objects, characters and graphic elements in general.

The physical studio has a front wall made entirely of glass; the remaining walls are opaque but are made to appear transparent. The effect sought is to recreate a 180-degree (panoramic) view of the studio, for which the side walls should appear to be glass, or invisible.

The physical studio can be of any desired size. LED panels are then placed on each side, rotated approximately 30 degrees, each panel forming the hypotenuse of the triangle defined by the angle between the glass and the wall.

Tilting the panels avoids aberrations caused by the illumination angle of the LEDs, which could otherwise suffer from total-reflection effects and appear dark when viewed through the camera. It also creates a hidden area between the panel and the wall for positioning a static camera with a wide-angle (170-degree) lens, attached to the glass, which captures the exterior for the stitching process.

A floor built from LED panels allows graphics to be changed, images to be displayed, or the existing floor itself to be visually completed.

Filters and thin films are applied to the glass to achieve a brightness level that equalizes the image produced on the LEDs with the image received through the front glass wall.

Sensor bars, integrated into the components of the set, are used for camera tracking purposes.

The method of the final effect comprises the steps of:

acquiring images complementary to the scene through one, two, or more cameras with wide-angle lenses;

subjecting the images from these cameras to a process called stitching, which creates a single image covering the area common to the two cameras, thus generating a panoramic image;

sending the panoramic image to a graphics computing system, which projects it onto a virtual plane, distant and scaled, to overlay and fit the real image;

adding camera and lens tracking sensors so that, in real time, the transmission camera has intrinsic and extrinsic data and informs the graphics computing system;

providing, for each transmitting camera, a graphics computing system that receives all of the above information;

calculating, in each graphics computing system, the projection distortion of the panoramic image projected onto the LED panel, always from the viewing angle of the current camera, so that the image on the LEDs fuses with the real image seen through the window;

adjusting all video signals in the television system with a calculated delay, so that the video is displayed in the correct order without visible lag.
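The stitching step above can be sketched as a simple seam blend between two overlapping captures (a toy illustration; `stitch_pair` is a hypothetical helper, and real stitching also aligns the images geometrically before blending):

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Blend two single-channel images that share `overlap` columns,
    using a linear alpha ramp across the seam. A toy stand-in for the
    stitching step: real pipelines also align the images geometrically
    (feature matching, warping) before blending."""
    h, wl = left.shape
    alpha = np.linspace(1.0, 0.0, overlap)            # weight of the left image
    seam = left[:, wl - overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.hstack([left[:, :wl - overlap], seam, right[:, overlap:]])

# Two toy 4x6 "captures" overlapping by 2 columns -> one 4x10 panorama.
panorama = stitch_pair(np.ones((4, 6)), np.zeros((4, 6)), overlap=2)
```

The output width is the sum of both inputs minus the shared overlap, which is what produces the single panoramic image covering the common area between the two cameras.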

The graphics computing system is installed on a server located in the central processing area and receives camera and lens tracking data through a connection such as Ethernet.
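The tracking link can be pictured as a stream of small binary packets arriving over Ethernet. The packet layout below is purely hypothetical (the source names no protocol); it only illustrates decoding pan, tilt, roll, position, zoom, and focus from a fixed-size payload:

```python
import struct

# Hypothetical packet layout for camera/lens tracking data: pan, tilt,
# roll in degrees, x/y/z position in metres, zoom and focus as
# normalized values, all little-endian float32. This field order and
# encoding are illustrative assumptions, not a real tracking protocol.
TRACKING_FMT = "<8f"

def parse_tracking_packet(payload: bytes) -> dict:
    """Decode one fixed-size tracking payload into named fields."""
    pan, tilt, roll, x, y, z, zoom, focus = struct.unpack(TRACKING_FMT, payload)
    return {"pan": pan, "tilt": tilt, "roll": roll,
            "pos": (x, y, z), "zoom": zoom, "focus": focus}

# Example: encode one sample reading and decode it again.
sample = struct.pack(TRACKING_FMT, 10.0, -5.0, 0.0, 1.5, 0.0, 2.0, 0.3, 0.7)
data = parse_tracking_packet(sample)
```

In practice these packets would be read from a UDP socket at the camera's frame rate and fed to the graphics server, which uses them to drive the virtual cameras.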

The studio may consist of two side LED panels, optionally an LED panel on the floor, and a glass window at the back. The side panels are perpendicular to the floor and rotated through an angle of 120 degrees (which must be greater than 90 degrees for proper operation) relative to the background window.

Behind each side panel are one or more still cameras with one or more wide-angle lenses, with an angle of view approaching 180 degrees, directed through the glass window to the outside of the studio. The video captured by each of these cameras is fed to a processor and sent to the respective side panel. In addition to the video from the camera behind the panel, the processor receives position and lens data from the studio's main camera.

In an alternative embodiment, only one still camera may be used to capture the external image.

In alternative embodiments, the still camera may have one or more wide-angle lenses.

The projection distortion process is performed by an asymmetric frustum projection technique, which is available in all programming languages and frameworks for real-time graphics computation known in the art.
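For illustration, an off-axis frustum of the kind this technique uses can be built as a 4x4 matrix in the same form as the classic OpenGL `glFrustum` matrix (a sketch under that assumption; the helper names are ours, not from the source):

```python
import numpy as np

def asymmetric_frustum(left, right, bottom, top, near, far):
    """Off-axis (asymmetric) perspective projection matrix in the form
    of the classic OpenGL glFrustum matrix. When left != -right (or
    bottom != -top), the frustum is asymmetric, which is the case when
    the virtual 'window' (here, the LED panel) is not centred on the
    camera's optical axis."""
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def project(M, p):
    """Project a 3D point to normalized device coordinates."""
    x, y, z, w = M @ np.array([*p, 1.0])
    return np.array([x, y, z]) / w

# A camera standing off-centre relative to the panel: the near-plane
# window is shifted to the right (left=0.2 instead of -1.0).
M = asymmetric_frustum(0.2, 1.0, -0.5, 0.5, 1.0, 100.0)
```

A point at the centre of the shifted near-plane window projects to the centre of the image, which is exactly the behaviour needed to keep the panel image registered with the tracked camera's viewpoint.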

Asymmetric background distortion is likewise performed by commercially available software, represented by the functions described herein.

The system receives camera position data relative to the studio, i.e., it identifies the cameras in the studio and creates virtual cameras with the same characteristics, with changes of lens, camera, and position occurring in real time. These data are copied and fed to two virtual systems simultaneously: the first (system 1) handles everything associated with LED-screen distortion; the second (system 2) applies the augmented-reality elements.

The LED panels have a processing time for displaying the image, so system 2, responsible for the virtual insertion, requires a delay: system 1 projects the image complementary to the glass onto the LEDs before system 2 acquires the studio image for the virtual insertion. This delay, called aliasing, is variable and in the worst case lies within 3 frames.
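The compensating delay can be sketched as a fixed-length frame buffer (a minimal illustration; `FrameDelay` is a hypothetical helper, with only the 3-frame worst case taken from the text):

```python
from collections import deque

class FrameDelay:
    """Fixed-length buffer that delays a video stream by `frames` frames,
    so that the LED projection pass (system 1) and the virtual-insertion
    pass (system 2) stay temporally aligned."""
    def __init__(self, frames=3):
        # Pre-fill with None so the first `frames` outputs are empty.
        self.buf = deque([None] * frames, maxlen=frames + 1)

    def push(self, frame):
        """Accept the current frame; return the frame from `frames` steps ago."""
        self.buf.append(frame)
        return self.buf.popleft()

# Feeding frames 0..5 yields each frame back 3 steps later.
d = FrameDelay(3)
outputs = [d.push(i) for i in range(6)]
```

A broadcast chain would apply the same idea per video signal, with the delay value measured against the panel's actual processing latency.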

To calculate the projection distortion and other resources, a server in the facility's central technical area is fed with tracking data from the cameras and lenses (e.g., via Ethernet or any other compatible means) and with video (e.g., via video cables). The system is divided into several modules: the projection distortion of the panoramic image is calculated by one part of the system, which is commercially available software; the 3D elements inserted into the studio are generated by another part, which is not described in detail here.

The perception of a virtual window arises from the temporal coherence between the real image seen through the glass and its completion by the LED panels. The described effect produces the perception of a real scene much larger than the available physical space, in which one can intervene fluently.

From the foregoing it will be observed that numerous modifications and variations can be effectuated without departing from the true spirit and scope of the novel concepts of the present invention. It is to be understood that no limitation with respect to the specific embodiments illustrated is intended or should be inferred. The disclosure is intended to cover all such modifications as fall within the scope of the invention.

Drawings

Fig. 1 shows a scene extension using an LED panel.

Fig. 2 shows a perspective view of scene graphic elements, such as a glass bottom and LED panel.

Fig. 3 shows a top view of the studio and the shape of the panels and scene graphical elements.

Fig. 4 shows an information graphic and an animated virtual character inserted into the video through a graphics server.

Fig. 5 shows the physical layout of a studio with anchor points for the panels.

Fig. 6 illustrates an embodiment of an asymmetric frustum projection technique.

Fig. 7 shows the distortion of an asymmetric frustum.
