Information processing apparatus, control method thereof, and computer-readable storage medium

Document No.: 1630850 | Published: 2020-01-14

This technology, "Information processing apparatus, control method thereof, and computer-readable storage medium", was created by Naoki Umemura (梅村直树) on 2019-06-26. Abstract: The invention discloses an information processing apparatus, a control method thereof, and a computer-readable storage medium. An information processing apparatus comprising: a setting unit configured to set a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and a generation unit configured to generate, based on the first virtual viewpoint set by the setting unit, viewpoint information representing a second virtual viewpoint that is different in at least one of position and direction from the first virtual viewpoint and that corresponds to a time point common to the first virtual viewpoint.

1. An information processing apparatus comprising:

a setting unit configured to set a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and

a generating unit configured to generate viewpoint information indicating a second virtual viewpoint that is different in at least one of position and direction from the first virtual viewpoint set by the setting unit and that corresponds to a time point common to the first virtual viewpoint, based on the first virtual viewpoint set by the setting unit.

2. The information processing apparatus according to claim 1, wherein the setting unit sets the first virtual viewpoint in time series, and

the generation unit generates viewpoint information representing the second virtual viewpoint in time series so as to maintain the relationship between the first virtual viewpoint and the second virtual viewpoint in distance and line-of-sight direction.

3. The information processing apparatus according to claim 1, wherein the first virtual viewpoint and the second virtual viewpoint have a common gaze point.

4. The information processing apparatus according to claim 3, wherein the viewpoint information representing the second virtual viewpoint is generated by rotating the first virtual viewpoint by a predetermined angle about a vertical line passing through the gaze point as a rotation axis.

5. The information processing apparatus according to claim 3 or 4, wherein the line-of-sight direction of the first virtual viewpoint is determined by a user specifying a viewpoint position of the first virtual viewpoint and a position of the gaze point.

6. The information processing apparatus according to any one of claims 1 to 4, further comprising: an image generating unit configured to generate a virtual viewpoint image corresponding to the first virtual viewpoint set by the setting unit and a virtual viewpoint image corresponding to the second virtual viewpoint indicated by the viewpoint information generated by the generating unit.

7. An information processing apparatus comprising:

a setting unit configured to set a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and

a generating unit configured to generate viewpoint information indicating a second virtual viewpoint that is different in at least one of position and direction from the first virtual viewpoint set by the setting unit and that corresponds to a time point common to the first virtual viewpoint, based on a position of an object included in the multi-viewpoint image.

8. The information processing apparatus according to claim 7, wherein the generation unit generates the viewpoint information representing the second virtual viewpoint determined based on a positional relationship between a first object and a second object included in the multi-viewpoint image.

9. The information processing apparatus according to claim 8, wherein the generation unit determines the second virtual viewpoint based on a positional relationship between the first object and the second object, and then causes the second virtual viewpoint to follow the first object so as to maintain a relationship with the first object in position and line-of-sight direction.

10. The information processing apparatus according to claim 8, wherein the generation unit generates the viewpoint information so as to capture the first object and the second object within a field of view of the second virtual viewpoint.

11. The information processing apparatus according to claim 10, wherein a gaze point of the second virtual viewpoint is set at the midpoint between the first object and the second object.

12. The information processing apparatus according to any one of claims 7 to 10, wherein the first object and the second object are objects included in a virtual viewpoint image corresponding to the first virtual viewpoint set by the setting unit, and the first object is the object closest to the first virtual viewpoint.

13. The information processing apparatus according to any one of claims 8 to 11, further comprising: a specifying unit configured to specify the second object based on a user operation.

14. The information processing apparatus according to any one of claims 7 to 11, further comprising: an obtaining unit configured to obtain a position of an object included in the multi-viewpoint images from material data used to generate a virtual viewpoint image.

15. The information processing apparatus according to any one of claims 7 to 11, further comprising: an image generating unit configured to generate a virtual viewpoint image corresponding to the first virtual viewpoint set by the setting unit and a virtual viewpoint image corresponding to the second virtual viewpoint indicated by the viewpoint information generated by the generating unit.

16. A control method of an information processing apparatus, comprising:

setting a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and

generating, based on the set first virtual viewpoint, viewpoint information representing a second virtual viewpoint that differs from the set first virtual viewpoint in at least one of position and direction and that corresponds to a time point common to the first virtual viewpoint.

17. A control method of an information processing apparatus, comprising:

setting a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and

generating viewpoint information representing a second virtual viewpoint that is different from the set first virtual viewpoint in at least one of position and direction and that corresponds to a time point common to the first virtual viewpoint, based on a position of an object included in the multi-viewpoint image.

18. The method of claim 16 or 17, further comprising: generating a virtual viewpoint image corresponding to the set first virtual viewpoint and a virtual viewpoint image corresponding to the second virtual viewpoint represented by the generated viewpoint information.

19. A computer-readable medium storing a program for causing a computer to execute the steps of the control method of the information processing apparatus defined in claim 16 or 17.

Technical Field

The present invention relates to an information processing apparatus regarding generation of a virtual viewpoint image, a control method thereof, and a computer-readable storage medium.

Background

Currently, a technique of generating a virtual viewpoint image using a plurality of viewpoint images, obtained by installing a plurality of cameras at different positions and performing synchronous shooting from a plurality of viewpoints, has been attracting attention. For example, this technique allows a user to view highlight scenes of a soccer or basketball game from various angles, and can give him/her a strong sense of immersion.

A virtual viewpoint image based on a plurality of viewpoint images is generated by aggregating images captured by the plurality of cameras in an image processing unit such as a server, and performing processing such as three-dimensional model generation and rendering in the image processing unit. Generating a virtual viewpoint image requires setting a virtual viewpoint. For example, a content creator generates a virtual viewpoint image by moving the position of the virtual viewpoint over time. Even for an image of a single moment, various virtual viewpoints may be necessary according to the tastes and preferences of viewers. In Japanese Patent Laid-Open No. 2015-187797, free viewpoint image data including a plurality of viewpoint images and metadata representing a recommended virtual viewpoint is generated. The user can easily set various virtual viewpoints using the metadata included in the free viewpoint image data.

When virtual viewpoint images are provided to a plurality of viewers with different tastes, or when a viewer wants to view both a virtual viewpoint image at a given viewpoint and a virtual viewpoint image at another viewpoint, a plurality of virtual viewpoint images corresponding to a plurality of virtual viewpoints at the same time are generated. However, if a plurality of time-series virtual viewpoints are set separately to generate the plurality of virtual viewpoint images, setting the virtual viewpoints takes a lot of time and labor, as in the conventional art. The technique disclosed in Japanese Patent Laid-Open No. 2015-187797 reduces the labor of setting a single virtual viewpoint; when setting a plurality of virtual viewpoints, however, the setting is still troublesome.

Disclosure of Invention

The present invention provides a technique that enables easy setting of a plurality of virtual viewpoints for virtual viewpoint image generation.

According to an aspect of the present invention, there is provided an information processing apparatus including: a setting unit configured to set a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and a generation unit configured to generate viewpoint information representing a second virtual viewpoint that is different in at least one of position and direction from the first virtual viewpoint set by the setting unit and that corresponds to a time point common to the first virtual viewpoint, based on the first virtual viewpoint set by the setting unit.

According to another aspect of the present invention, there is provided an information processing apparatus comprising: a setting unit configured to set a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and a generation unit configured to generate viewpoint information indicating a second virtual viewpoint that is different in at least one of position and direction from the first virtual viewpoint set by the setting unit and that corresponds to a time point common to the first virtual viewpoint, based on a position of an object included in the multi-viewpoint image.

According to still another aspect of the present invention, there is provided a method of controlling an information processing apparatus, comprising: setting a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and generating viewpoint information representing a second virtual viewpoint that is different from the set first virtual viewpoint in at least one of position and direction and that corresponds to a time point common to the first virtual viewpoint, based on the set first virtual viewpoint.

According to still another aspect of the present invention, there is provided a method of controlling an information processing apparatus, comprising: setting a first virtual viewpoint for virtual viewpoint image generation based on multi-viewpoint images obtained from a plurality of cameras; and generating viewpoint information representing a second virtual viewpoint that is different from the set first virtual viewpoint in at least one of position and direction and that corresponds to a time point common to the first virtual viewpoint, based on a position of an object included in the multi-viewpoint image.

According to still another aspect of the present invention, there is provided a computer-readable medium storing a program for causing a computer to execute the steps of the control method of the information processing apparatus described above.

Other features of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.

Drawings

Fig. 1 is a block diagram showing an example of a functional structure of an image generating apparatus according to an embodiment.

Fig. 2 is a schematic diagram showing an example of the configuration of virtual viewpoints according to the first embodiment.

Fig. 3A and 3B are views illustrating examples of a viewpoint trajectory.

Fig. 4A and 4B are flowcharts showing the processing of the other viewpoint generating unit and the virtual viewpoint image generation unit according to the first embodiment.

Fig. 5 is a schematic diagram showing an example of the configuration of viewpoints (virtual cameras) according to the second embodiment.

Fig. 6A is a view showing an example of the configuration of viewpoints (virtual cameras) three-dimensionally.

Fig. 6B is a view showing viewpoint information.

Fig. 7 is a view for explaining a viewpoint (virtual camera) configuration method according to the second embodiment.

Fig. 8 is a flowchart showing the processing of the other viewpoint generating unit according to the second embodiment.

Fig. 9 is a view for explaining another example of the configuration of viewpoints (virtual cameras) according to the second embodiment.

Fig. 10A and 10B are views showing examples of virtual viewpoint images from the viewpoints shown in Fig. 9.

Fig. 11A is a view showing a virtual viewpoint image generation system.

Fig. 11B is a block diagram showing an example of the hardware configuration of the image generating apparatus.

Detailed Description

Several embodiments of the present invention will now be described with reference to the accompanying drawings. In this specification, "image" is used as a generic term for "video", "still image", and "moving image".

< first embodiment >

Fig. 11A is a block diagram showing an example of the configuration of the virtual viewpoint image generation system according to the first embodiment. In Fig. 11A, a plurality of cameras 1100 are connected to a local area network (LAN) 1101. The server 1102 stores a plurality of images obtained by the cameras 1100 as multi-viewpoint images 1104 in the storage device 1103 via the LAN 1101. The server 1102 generates, from the multi-viewpoint images 1104, material data 1105 (including three-dimensional object models, positions of three-dimensional objects, textures, and the like) for generating a virtual viewpoint image, and stores it in the storage device 1103. The image generation apparatus 100 obtains the material data 1105 (and, if necessary, the multi-viewpoint images 1104) from the server 1102 via the LAN 1101, and generates a virtual viewpoint image.

Fig. 11B is a block diagram showing an example of the hardware configuration of an information processing apparatus serving as the image generation apparatus 100. In the image generation apparatus 100, the CPU 151 realizes various processes by executing programs stored in the ROM 152 or in the RAM 153 serving as a main memory. The ROM 152 is a read-only nonvolatile memory, and the RAM 153 is a random-access volatile memory. The network I/F 154 is connected to the LAN 1101 and communicates with, for example, the server 1102. The input device 155 is a device such as a keyboard or a mouse, and accepts operation inputs from the user. The display device 156 provides various displays under the control of the CPU 151. The external storage device 157 is constituted by a nonvolatile memory such as a hard disk or a silicon disk, and stores various data and programs. The bus 158 connects the above units and transfers data between them.

Fig. 1 is a block diagram showing an example of the functional configuration of the image generation apparatus 100 according to the first embodiment. Note that each unit shown in Fig. 1 may be realized by the CPU 151 executing a predetermined program, by dedicated hardware, or by cooperation between software and hardware.

The viewpoint input unit 101 accepts a user input for setting the virtual viewpoint of a virtual camera. The virtual viewpoint designated by the input accepted by the viewpoint input unit 101 will be referred to as an input viewpoint. The user input for specifying an input viewpoint is performed via the input device 155. The other viewpoint generating unit 102 generates a virtual viewpoint different from the input viewpoint, so as to set the position of another virtual camera based on the input viewpoint specified by the user. The virtual viewpoint generated by the other viewpoint generating unit 102 will be referred to as another viewpoint (other viewpoint). The material data obtaining unit 103 obtains the material data 1105 used to generate a virtual viewpoint image from the server 1102. Based on the input viewpoint from the viewpoint input unit 101 and the other viewpoint from the other viewpoint generating unit 102, the virtual viewpoint image generation unit 104 generates virtual viewpoint images corresponding to the respective virtual viewpoints using the material data obtained by the material data obtaining unit 103. The display control unit 105 performs control to display, on the display device 156, an image of the material data (for example, one of the multi-viewpoint images 1104) obtained by the material data obtaining unit 103 and the virtual viewpoint images generated by the virtual viewpoint image generation unit 104. The data storage unit 107 stores, in the external storage device 157, the virtual viewpoint images generated by the virtual viewpoint image generation unit 104, viewpoint information transmitted from the viewpoint input unit 101 or the other viewpoint generating unit 102, and the like. Note that the configuration of the image generation apparatus 100 is not limited to the one shown in Fig. 1. For example, the viewpoint input unit 101 and the other viewpoint generating unit 102 may be installed in an information processing apparatus other than the image generation apparatus 100.

Fig. 2 is a schematic diagram showing an example of the configuration of virtual viewpoints (virtual cameras). Fig. 2 shows the positional relationship between an offensive player, a defensive player, and virtual cameras in, for example, a soccer game. In Fig. 2, 2a shows the arrangement of the players, the ball, and the virtual cameras viewed from the side, and 2b shows them viewed from the top. In Fig. 2, an attacker 201 controls a ball 202. A defender 203 is a player of the opposing team who faces the attacker 201 and attempts to block the attack. The virtual camera 204 corresponds to an input viewpoint 211 set by a user (e.g., a content creator); it is arranged behind the attacker 201 and oriented from the attacker 201 toward the defender 203. The position, direction, posture, angle of view, and the like of the virtual camera are set as the viewpoint information of the input viewpoint 211 (virtual camera 204), but the viewpoint information is not limited thereto. For example, the direction of the virtual camera may be set by specifying the position of the virtual camera and the position of the gaze point.

The virtual camera 205 corresponds to another viewpoint 212 set based on the input viewpoint 211, and is configured to face the virtual camera 204. In the example of Fig. 2, the virtual camera 205 is arranged behind the defender 203, and its line-of-sight direction is the direction from the defender 203 toward the attacker 201. The virtual camera 204 is configured based on the input viewpoint 211, which the content creator sets by manually inputting parameters that determine, for example, the camera position and direction. In contrast, the other viewpoint 212 (virtual camera 205) is automatically configured by the other viewpoint generating unit 102 in response to the configuration of the input viewpoint 211 (virtual camera 204). The gaze point 206 is the point at which the line of sight of each of the virtual cameras 204 and 205 intersects the ground. In this embodiment, the gaze point of the input viewpoint 211 and the gaze point of the other viewpoint 212 are common.

In 2a of Fig. 2, the distance between the input viewpoint 211 and the attacker 201 is h1. The height of each of the input viewpoint 211 and the other viewpoint 212 from the ground is h2. The distance on the ground between the gaze point 206 and the foot of the perpendicular dropped from each of the input viewpoint 211 and the other viewpoint 212 is h3. The viewpoint position and line-of-sight direction of the other viewpoint 212 are obtained by rotating the viewpoint position and line-of-sight direction of the input viewpoint 211 by 180° about the perpendicular 213 passing through the gaze point 206 as an axis.
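For concreteness, a minimal sketch of this rotation follows (the function and variable names, and the example coordinates, are ours; the patent prescribes no implementation). Rotating about the vertical axis through the gaze point preserves both the height h2 and the horizontal distance h3:

```python
import math

def rotate_about_gaze_axis(viewpoint, gaze_point, angle_deg):
    """Rotate a viewpoint position about the vertical line through the
    gaze point (the perpendicular 213 in Fig. 2). The rotation acts in
    the ground (x-y) plane, so the height h2 and the horizontal distance
    h3 to the gaze point stay constant."""
    theta = math.radians(angle_deg)
    dx = viewpoint[0] - gaze_point[0]
    dy = viewpoint[1] - gaze_point[1]
    return (gaze_point[0] + dx * math.cos(theta) - dy * math.sin(theta),
            gaze_point[1] + dx * math.sin(theta) + dy * math.cos(theta),
            viewpoint[2])

# Example: the other viewpoint 212 is the input viewpoint 211 rotated by
# 180 degrees; the two then face the common gaze point from opposite sides.
input_viewpoint = (10.0, 5.0, 2.0)   # hypothetical (x, y, z), in meters
gaze_point = (14.0, 5.0, 0.0)
other_viewpoint = rotate_about_gaze_axis(input_viewpoint, gaze_point, 180.0)
print(other_viewpoint)               # approximately (18.0, 5.0, 2.0)
```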

Fig. 3A is a view illustrating the trajectories of the input viewpoint 211 and the other viewpoint 212 shown in Fig. 2. The trajectory (camera path) of the input viewpoint 211 is a curve 301 passing through points A1, A2, A3, A4, and A5, and the trajectory (camera path) of the other viewpoint 212 is a curve 302 passing through points B1, B2, B3, B4, and B5. Fig. 3B is a view showing the positions of the input viewpoint 211 and the other viewpoint 212 at respective times, with the abscissa representing time. At times T1 to T5, the input viewpoint 211 is located at A1 to A5, and the other viewpoint 212 at B1 to B5. For example, A1 and B1 represent the positions of the input viewpoint 211 and the other viewpoint 212 at the same time T1.

In Fig. 3A, the directions of the straight lines connecting points A1 and B1, A2 and B2, A3 and B3, A4 and B4, and A5 and B5 represent the line-of-sight directions of the input viewpoint 211 and the other viewpoint 212 at times T1 to T5. That is, in this embodiment, the lines of sight of the two virtual viewpoints (virtual cameras) are oriented so that the viewpoints always face each other at each time. The same applies to the distance between the two virtual viewpoints: the distance between the input viewpoint 211 and the other viewpoint 212 is kept constant at every moment.

Next, the operation of the other viewpoint generating unit 102 will be described. Fig. 4A is a flowchart showing the process of obtaining viewpoint information by the viewpoint input unit 101 and the other viewpoint generating unit 102. In step S401, the viewpoint input unit 101 determines whether the content creator has input viewpoint information of the input viewpoint 211. If the viewpoint input unit 101 determines in step S401 that the content creator has input viewpoint information, the process advances to step S402. In step S402, the viewpoint input unit 101 supplies the viewpoint information of the input viewpoint 211 to the other viewpoint generating unit 102 and the virtual viewpoint image generation unit 104. In step S403, the other viewpoint generating unit 102 generates another viewpoint based on the viewpoint information of the input viewpoint. For example, as described with reference to Fig. 2, the other viewpoint generating unit 102 generates the other viewpoint 212 based on the input viewpoint 211 and generates its viewpoint information. In step S404, the other viewpoint generating unit 102 supplies the generated viewpoint information of the other viewpoint to the virtual viewpoint image generation unit 104. In step S405, the other viewpoint generating unit 102 determines whether reception of viewpoint information from the viewpoint input unit 101 has ended. If the other viewpoint generating unit 102 determines that the reception of viewpoint information has ended, the flow ends. If it determines that viewpoint information is still being received, the process returns to step S401.

Through the above-described processing, the other viewpoint generating unit 102 generates another viewpoint in time series along with the time-series viewpoint inputs from the viewpoint input unit 101. For example, when a movement is input such that the input viewpoint 211 draws the curve 301 shown in Fig. 3A, the other viewpoint generating unit 102 generates the other viewpoint 212 so that it draws the curve 302 following the curve 301. The virtual viewpoint image generation unit 104 generates virtual viewpoint images from the viewpoint information from the viewpoint input unit 101 and the other viewpoint information from the other viewpoint generating unit 102. A minimal sketch of this loop appears below.
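The following sketch condenses the loop of Fig. 4A (all names are ours, standing in for units 101, 102, and 104; the patent specifies no code). Each time-series input viewpoint yields one automatically generated other viewpoint for the same time:

```python
from typing import Callable, Optional

def viewpoint_loop(read_input: Callable[[], Optional[dict]],
                   generate_other: Callable[[dict], dict],
                   submit: Callable[[dict], None]) -> None:
    """Hypothetical driver for the Fig. 4A flow."""
    while True:
        vp = read_input()            # S401: poll the viewpoint input unit
        if vp is None:               # S405: the input stream has ended
            break
        submit(vp)                   # S402: input viewpoint to image generation
        submit(generate_other(vp))   # S403/S404: derived other viewpoint
```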

Next, the virtual viewpoint image generation processing of the virtual viewpoint image generation unit 104 will be described. Fig. 4B is a flowchart showing the process in which the virtual viewpoint image generation unit 104 generates virtual viewpoint images. In step S411, the virtual viewpoint image generation unit 104 determines whether viewpoint information of the input viewpoint 211 has been received from the viewpoint input unit 101. If the virtual viewpoint image generation unit 104 determines in step S411 that it has received the viewpoint information, the process advances to step S412. If it determines that it has not received the viewpoint information, the process returns to step S411. In step S412, the virtual viewpoint image generation unit 104 configures the virtual camera 204 based on the received viewpoint information, and generates a virtual viewpoint image to be captured by the virtual camera 204.

In step S413, the virtual viewpoint image generation unit 104 determines whether viewpoint information of the other viewpoint 212 has been received from the other viewpoint generating unit 102. If the virtual viewpoint image generation unit 104 determines in step S413 that it has received the viewpoint information of the other viewpoint 212, the process advances to step S414. If it determines that it has not received the viewpoint information of the other viewpoint 212, the process returns to step S413. In step S414, the virtual viewpoint image generation unit 104 configures the virtual camera 205 based on the viewpoint information received in step S413, and generates a virtual viewpoint image to be captured by the virtual camera 205. In step S415, the virtual viewpoint image generation unit 104 determines whether reception of viewpoint information from each of the viewpoint input unit 101 and the other viewpoint generating unit 102 has ended. If the virtual viewpoint image generation unit 104 determines that the reception of viewpoint information has ended, the processing of the flowchart ends. If it determines that the reception of viewpoint information has not ended, the process returns to step S411.

Although steps S412 and S414, which are the virtual viewpoint image generation processes, are performed sequentially in the flowchart of Fig. 4B, the present invention is not limited thereto. A plurality of virtual viewpoint image generation units 104 may be provided in correspondence with the plurality of virtual viewpoints to execute the virtual viewpoint image generation processing of steps S412 and S414 in parallel. Note that the virtual viewpoint image generated in step S412 is an image that would be captured by the virtual camera 204. Similarly, the virtual viewpoint image generated in step S414 is an image that would be captured by the virtual camera 205.

Next, the generation of the other viewpoint 212 (virtual camera 205) from the input viewpoint 211 (virtual camera 204) in step S403 will be further explained with reference to Figs. 2, 3A, and 3B. In this embodiment, when the content creator designates one input viewpoint 211, the other viewpoint 212 is set based on the input viewpoint 211 according to a predetermined rule. As an example of such a rule, the present embodiment uses the following configuration: the common gaze point 206 is used for the input viewpoint 211 and the other viewpoint 212, and the other viewpoint 212 is generated by rotating the input viewpoint 211 by a predetermined angle about the vertical line 213 passing through the gaze point 206 as a rotation axis.

The content creator configures the input viewpoint 211 at a distance h1 behind the attacker 201 and at a height h2 from the ground. At time T1, the line-of-sight direction of the input viewpoint 211 is oriented toward the defender 203. In this embodiment, the intersection of the ground and the line of sight of the input viewpoint 211 is used as the gaze point 206. At time T1, in step S403 of Fig. 4A, the other viewpoint 212 is generated by the other viewpoint generating unit 102. In this embodiment, the other viewpoint generating unit 102 obtains the other viewpoint 212 by rotating the position of the input viewpoint 211 by a predetermined angle (180° in this embodiment) about the perpendicular 213, which is the line passing through the gaze point 206 and perpendicular to the ground, as the rotation axis. As a result, the other viewpoint 212 is arranged at the height h2 and at the distance h3 from the gaze point 206.

Note that in this embodiment the gaze point 206 is set on the ground, but it is not limited thereto. For example, when the line-of-sight direction of the input viewpoint 211 indicated by the input viewpoint information is parallel to the ground, the gaze point may be set at a point at height h2 on the perpendicular 213 passing through the gaze point 206. The other viewpoint generating unit 102 generates the other viewpoint from the input viewpoints set in time series so as to maintain the relationship between the input viewpoint and the other viewpoint in distance and line-of-sight direction. The method of generating the other viewpoint 212 from the input viewpoint 211 is therefore not limited to the one described above. For example, the gaze point of the input viewpoint 211 and the gaze point of the other viewpoint 212 may be set separately.

In the example of Fig. 3A, the curve 301 represents the trajectory of the input viewpoint 211 as time elapses from time T1; at times T2, T3, T4, and T5, the positions of the input viewpoint 211 (the positions of the virtual camera 204) are A2, A3, A4, and A5, respectively. Similarly, at times T2, T3, T4, and T5, the positions of the other viewpoint 212 (the positions of the virtual camera 205) are B2, B3, B4, and B5 on the curve 302, respectively. The positional relationship between the input viewpoint 211 and the other viewpoint 212 maintains the relative state at time T1; at each time, the input viewpoint 211 and the other viewpoint 212 are arranged at positions symmetric with respect to the vertical line 213 passing through the gaze point 206. The position of the other viewpoint 212 (the position of the virtual camera 205) is automatically configured based on the input viewpoint 211 set by the user input so that this positional relationship holds at each of the times T1 to T5. Needless to say, the position of the other viewpoint is not limited to the above positional relationship, and the number of other viewpoints is not limited to one.

In the first embodiment, the virtual camera 205 is arranged at the position obtained by rotating the input viewpoint 211 created by the content creator by 180° about the vertical line 213 passing through the gaze point 206 as an axis, based on its viewpoint information (for example, viewpoint position and line-of-sight direction), but the arrangement is not limited thereto. In Fig. 2, the parameters that determine the position of the other viewpoint 212, namely the viewpoint height h2, the horizontal distance h3, and the line-of-sight direction, may be changed according to a certain rule. For example, the height and distance of the other viewpoint 212 from the gaze point 206 may differ from those of the input viewpoint 211. Further, other viewpoints may be arranged at positions obtained by rotating the input viewpoint 211 about the vertical line 213 in steps of 120°, as sketched below. Another viewpoint may also be generated at the same position as the input viewpoint with a different posture and/or angle of view.
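For illustration, a sketch of this multi-viewpoint variation follows (the names and example values are ours; the rule itself, equal angular steps about the vertical line 213, is the one stated above):

```python
import math

def ring_of_viewpoints(viewpoint, gaze_point, count):
    """Place count - 1 additional viewpoints at equal angular steps
    (360/count degrees) around the vertical line through the gaze point,
    all at the same height and horizontal distance as the input viewpoint."""
    dx = viewpoint[0] - gaze_point[0]
    dy = viewpoint[1] - gaze_point[1]
    others = []
    for i in range(1, count):
        t = math.radians(360.0 * i / count)
        others.append((gaze_point[0] + dx * math.cos(t) - dy * math.sin(t),
                       gaze_point[1] + dx * math.sin(t) + dy * math.cos(t),
                       viewpoint[2]))
    return others

# Example: two other viewpoints 120 degrees apart (count = 3).
others = ring_of_viewpoints((10.0, 5.0, 2.0), (14.0, 5.0, 0.0), 3)
```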

As described above, according to the first embodiment, when a virtual viewpoint image is generated, an input viewpoint is set by a user input, and another viewpoint different from the input viewpoint in at least one of position and direction is automatically set. According to the first embodiment, a plurality of virtual viewpoint images corresponding to a plurality of virtual viewpoints at a common time can therefore be obtained easily.

< second embodiment >

In the first embodiment, the configuration in which another viewpoint (for example, the viewpoint of the virtual camera 205) is automatically set based on the input viewpoint (for example, the viewpoint of the virtual camera 204) set by the user has been described. In the second embodiment, another viewpoint is automatically set using the position of an object. Note that the hardware configuration and the functional configuration of the virtual viewpoint image generation system and the image generation apparatus 100 in the second embodiment are the same as those in the first embodiment (Figs. 11A, 11B, and 1). Note also that the other viewpoint generating unit 102 may receive material data from the material data obtaining unit 103.

Fig. 5 is a schematic diagram simulating a soccer game, showing the configuration of viewpoints (virtual cameras) when the soccer field is viewed from the top. In Fig. 5, the blank and shaded square objects represent soccer players, and the presence or absence of shading represents the team to which each belongs. In Fig. 5, player A holds the ball. The content creator sets the input viewpoint 211 behind player A (on the side opposite to the position of the ball), and the virtual camera 501 is installed based on the input viewpoint 211. Players B to G of player A's team and of the opposing team are located around player A. Another viewpoint 212a (virtual camera 502) is arranged behind player B, another viewpoint 212b (virtual camera 503) behind player F, and another viewpoint 212c (virtual camera 504) at a position from which all the players A to G can be viewed from the side. Note that the input viewpoint 211 side of players B and F is referred to as the front, and the opposite side as the rear.

Fig. 6A is a view showing the soccer field of Fig. 5 three-dimensionally. In Fig. 6A, one of the four corners of the soccer field is defined as the origin of the three-dimensional coordinates, the long-side direction of the field as the x-axis, the short-side direction as the y-axis, and the height direction as the z-axis. Fig. 6A shows only players A and B among the players shown in Fig. 5, and only the input viewpoint 211 (virtual camera 501) and the other viewpoint 212a (virtual camera 502) among the viewpoints (virtual cameras) shown in Fig. 5. Fig. 6B is a view illustrating the viewpoint information of the input viewpoint 211 and the other viewpoint 212a shown in Fig. 6A. The viewpoint information of the input viewpoint 211 includes the coordinates (x1, y1, z1) of the viewpoint position and the coordinates (x2, y2, z2) of the gaze point position. The viewpoint information of the other viewpoint 212a includes the coordinates (x3, y3, z3) of the viewpoint position and the coordinates (x4, y4, z4) of the gaze point position.
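A viewpoint is thus fully described by two points in field coordinates. A minimal representation might look as follows (a sketch; the patent does not fix any data layout):

```python
from dataclasses import dataclass
from typing import Tuple

Point3 = Tuple[float, float, float]  # (x, y, z) in field coordinates

@dataclass
class ViewpointInfo:
    """Viewpoint information as in Fig. 6B: the line-of-sight direction is
    implied by the pair (viewpoint position, gaze point position)."""
    position: Point3    # e.g. (x1, y1, z1)
    gaze_point: Point3  # e.g. (x2, y2, z2)
```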

Fig. 7 shows the three-dimensional coordinates (Fig. 6B) of the viewpoint positions and gaze point positions of the input viewpoint 211 (virtual camera 501) and the other viewpoint 212a (virtual camera 502), drawn in the bird's-eye view shown in Fig. 5. The input viewpoint 211 (virtual camera 501) is oriented in the direction connecting player A and the ball, and the other viewpoint 212a (virtual camera 502) in the direction connecting player B and player A.

Fig. 8 is a flowchart showing the generation processing of the other viewpoint 212a by the other viewpoint generating unit 102 according to the second embodiment. In step S801, the other viewpoint generating unit 102 determines whether viewpoint information of the input viewpoint 211 has been received from the viewpoint input unit 101. If the other viewpoint generating unit 102 determines in step S801 that the viewpoint information has been received, the process advances to step S802. If it determines that it has not received the viewpoint information, the process repeats step S801. In step S802, the other viewpoint generating unit 102 determines whether the coordinates of players A to G (the coordinates of the objects) included in the material data have been obtained from the material data obtaining unit 103. If the other viewpoint generating unit 102 determines that it has obtained the material data, the process advances to step S803. If it determines that it has not obtained the material data, the process repeats step S802.

In step S803, the other viewpoint generating unit 102 generates the viewpoint position and gaze point position of the virtual camera 502 (the other viewpoint) based on the viewpoint information obtained in step S801 and the material data (coordinates of the objects) obtained in step S802. In step S804, the other viewpoint generating unit 102 determines whether reception of viewpoint information from the viewpoint input unit 101 has ended. If the other viewpoint generating unit 102 determines that the reception of viewpoint information has ended, the flowchart ends. If it determines that viewpoint information is still being received, the process returns to step S801.

The generation of the other viewpoint in step S803 will now be described in detail. As shown in Fig. 7, the input viewpoint 211 set by the content creator is located at the coordinates (x1, y1, z1) behind player A, and the coordinates of the gaze point position of the input viewpoint 211 are (x2, y2, z2). The position where the line of sight in the line-of-sight direction set for the input viewpoint 211 intersects a plane of a predetermined height (e.g., the ground) is defined as the gaze point 206. Alternatively, the content creator may designate the gaze point 206 to set the line-of-sight direction as the direction connecting the input viewpoint 211 and the gaze point 206. The other viewpoint generating unit 102 according to this embodiment generates another viewpoint based on the positional relationship between two objects (in this example, players A and B) included in the multi-viewpoint images 1104. In this embodiment, after the other viewpoint thus generated is determined as the initial viewpoint, the other viewpoint is made to follow the position of the object (player A) so as to maintain its relationship with that object in position and line-of-sight direction.

Next, the method of determining the initial viewpoint will be explained. First, the other viewpoint generating unit 102 obtains, from the viewpoint input unit 101, the viewpoint information of the input viewpoint 211 including the coordinates (x1, y1, z1) of the viewpoint position and the coordinates (x2, y2, z2) of the gaze point position. Then, the other viewpoint generating unit 102 obtains the position coordinates of each player (the object position information in the material data) from the material data obtaining unit 103. For example, the position coordinates of player A are (xa, ya, za). The value za in the height direction of the position coordinates of player A may be, for example, the height of the center of the player's face, or the player's height. When the player's height is used, the height of each player is registered in advance.

In this embodiment, the other viewpoint 212a (virtual camera 502) is generated behind player B. The other viewpoint generating unit 102 determines the gaze point of the other viewpoint 212a based on the position of player A, who is closest to the input viewpoint 211. In this embodiment, the position of the gaze point on the x-y plane is set to the position (xa, ya) of player A on the x-y plane, and the position in the z direction is set to the height of the ground. In this example, the coordinates of the gaze point position are set to (x4, y4, z4) = (xa, ya, 0). The other viewpoint generating unit 102 sets, as the viewpoint position of the other viewpoint 212a, a position separated from the position of player B by a predetermined distance on the line connecting the position coordinates of player B and the coordinates (x4, y4, z4) of the gaze point position of the other viewpoint 212a. In Fig. 7, the coordinates (x3, y3, z3) are set as the viewpoint position of the other viewpoint 212a (virtual camera 502). The predetermined distance may be set in advance by the user, or may be determined by the other viewpoint generating unit 102 based on the positional relationship (e.g., the distance) between players A and B.
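A rough sketch of this initial determination follows (the function name, example coordinates, and the explicit camera-height parameter are our assumptions; the patent leaves the height component of (x3, y3, z3) to the viewpoint information):

```python
import math

def initial_other_viewpoint(player_a, player_b, dist, cam_height):
    """Determine the initial other viewpoint 212a: the gaze point is
    player A's position on the ground, and the viewpoint lies `dist`
    behind player B on the line joining the gaze point and player B."""
    gaze = (player_a[0], player_a[1], 0.0)   # (x4, y4, z4) = (xa, ya, 0)
    vx, vy = player_b[0] - gaze[0], player_b[1] - gaze[1]
    n = math.hypot(vx, vy)
    ux, uy = vx / n, vy / n                  # unit vector: gaze point -> player B
    view = (player_b[0] + ux * dist, player_b[1] + uy * dist, cam_height)
    return view, gaze                        # (x3, y3, z3) and (x4, y4, z4)

view, gaze = initial_other_viewpoint((30.0, 20.0, 0.0), (34.0, 23.0, 0.0),
                                     dist=2.0, cam_height=1.7)
```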

After the viewpoint position of the other viewpoint 212a has been determined based on the positional relationship between players A and B, and the gaze point position based on the position coordinates of player A in this manner, the distance and line-of-sight direction between the other viewpoint 212a and player A are fixed. That is, after the viewpoint position and the gaze point position of the other viewpoint 212a are determined according to the setting of the input viewpoint 211, the distance and direction of the other viewpoint 212a with respect to the gaze point determined from the position coordinates of player A are fixed. With this setting, even if the position coordinates of players A and B change with time, the positional relationship between the other viewpoint 212a (virtual camera 502) and player A is maintained. In short, once the viewpoint information of the other viewpoint 212a has been determined from the input viewpoint 211 (virtual camera 501) and the position coordinates of players A and B, the viewpoint position and gaze point position of the other viewpoint 212a (virtual camera 502) are thereafter determined from the position coordinates of player A alone.
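The follow step then reduces to re-applying a stored offset each frame, as in this sketch (names and values are ours, continuing the previous example):

```python
def follow_player(player_a, offset, cam_height):
    """Keep the other viewpoint's distance and direction to player A fixed:
    the x-y offset (viewpoint - gaze point) computed at the initial frame
    is re-applied to player A's current ground position."""
    gaze = (player_a[0], player_a[1], 0.0)
    view = (gaze[0] + offset[0], gaze[1] + offset[1], cam_height)
    return view, gaze

# Offset computed once at the initial frame: view0 - gaze0 (x-y components).
offset = (5.6, 4.2)
view, gaze = follow_player((31.0, 21.0, 0.0), offset, cam_height=1.7)
```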

Note that the other viewpoint generating unit 102 needs to specify the two objects, players A and B, in order to generate the other viewpoint 212a. Both players A and B are objects included in the virtual viewpoint image from the input viewpoint 211. For example, the object closest to the input viewpoint 211 is selected as player A, and player B may be specified by the user selecting an object in the virtual viewpoint image of the input viewpoint 211. Although the distance and line-of-sight direction between the other viewpoint 212a and player A are fixed in the above description, the present invention is not limited thereto. For example, the process of determining the other viewpoint 212a based on the positions of players A and B (the process of determining the initial viewpoint described above) may be continued. Alternatively, the object used to generate another viewpoint (the object corresponding to player B) may be selected based on an attribute of the object. For example, the team to which each object belongs may be determined from the object's uniform, and an object belonging to player A's team or to the opposing team may be selected as player B from the objects present in the virtual viewpoint image obtained by the virtual camera 501. By selecting a plurality of objects for setting other viewpoints, a plurality of other viewpoints can be set simultaneously.

The configuration described above sets the input viewpoint 211 in response to the content creator's operation and sets another viewpoint behind a player near player A. However, the method of setting another viewpoint is not limited thereto. As shown in Fig. 9, the other viewpoint 212c may be arranged to the side of players A and B so as to capture both players within the angle of view, that is, within the field of view of the other viewpoint 212c. In Fig. 9, the midpoint (x7, y7, z7) of a line segment 901 connecting the position coordinates of players A and B is set as the gaze point 206c, and the other viewpoint 212c for the virtual camera 504 is set on the line perpendicular to the line segment 901 at the gaze point 206c. The distance from the other viewpoint 212c to the gaze point 206c and the angle of view are set so that both players A and B fall within the angle of view, which determines the position coordinates (x6, y6, z6) of the other viewpoint 212c. Note that it is also possible to fix the angle of view and set the distance between the other viewpoint 212c and the gaze point 206c so that both players A and B fall within it.
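As a sketch (the names, the margin factor, and the camera height are our assumptions; the patent only requires that both players fall within the angle of view), the required distance follows from the half-span between A and B and half the horizontal angle of view:

```python
import math

def side_viewpoint(player_a, player_b, fov_deg, cam_height, margin=1.2):
    """Configure the other viewpoint 212c of Fig. 9: the gaze point is the
    midpoint of players A and B, and the viewpoint sits on the perpendicular
    to segment AB, far enough away that both players fall within the
    horizontal angle of view fov_deg."""
    mx = (player_a[0] + player_b[0]) / 2.0
    my = (player_a[1] + player_b[1]) / 2.0
    gaze = (mx, my, 0.0)                      # midpoint (x7, y7, 0)
    abx, aby = player_b[0] - player_a[0], player_b[1] - player_a[1]
    span = math.hypot(abx, aby)
    # Half the A-B span must subtend at most half the angle of view.
    dist = margin * (span / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    nx, ny = -aby / span, abx / span          # unit normal to segment AB
    view = (mx + nx * dist, my + ny * dist, cam_height)  # (x6, y6, z6)
    return view, gaze

view, gaze = side_viewpoint((30.0, 20.0, 0.0), (36.0, 24.0, 0.0),
                            fov_deg=60.0, cam_height=5.0)
```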

The virtual viewpoint image captured by the virtual camera 504 arranged at the other viewpoint 212c is, for example, an image such as that shown in Fig. 10A. As shown in Fig. 10B, by setting a large z6 in the position coordinates (x6, y6, z6) of the other viewpoint 212c (virtual camera 504), an image viewed from above the field can be obtained so as to capture the players around player A. In addition, the other viewpoint 212c may be rotated from the x-y plane by a predetermined angle about the line segment 901 connecting the positions of players A and B as an axis.

Note that the display control unit 105 displays, on the display device 156, the virtual viewpoint images of the input viewpoint and the other viewpoint generated by the virtual viewpoint image generation unit 104. The display control unit 105 may display a plurality of virtual viewpoint images simultaneously so that the user can select the virtual viewpoint image he/she wants.

As described above, according to the respective embodiments, another viewpoint is automatically set according to an operation of the content creator setting one input viewpoint. Since a plurality of virtual viewpoints corresponding to the time of the one set virtual viewpoint are obtained from the operation of setting that single virtual viewpoint, a plurality of virtual viewpoints (and virtual viewpoint images) at the same time can be created easily. Although the input viewpoint is set by the content creator in the description of the embodiments, it is not limited thereto, and may be set by an end user or others. Alternatively, the image generation apparatus 100 may obtain viewpoint information representing an input viewpoint from the outside and generate viewpoint information representing another viewpoint corresponding to the input viewpoint.

The image generation apparatus 100 may determine whether to set other viewpoints, or the number of other viewpoints to set, according to an input user operation, the number of objects in the shooting target area, or the occurrence timing of an event in the shooting target area. When the input viewpoint and the other viewpoint are set, the image generation apparatus 100 may display the virtual viewpoint image corresponding to the input viewpoint and the virtual viewpoint image corresponding to the other viewpoint together on the display unit, or switch between them.

Although soccer is used as an example in the description of the embodiments, the present invention is not limited thereto. For example, the present invention may be applied to sports such as football, baseball, or skating, or to performances performed on a stage. Although the virtual cameras are set based on the positional relationship between players in the embodiments, the present invention is not limited thereto; the virtual cameras may be set in consideration of, for example, the position of a referee or a scorer.

Other Embodiments

The embodiments of the present invention can also be realized by supplying software (a program) that implements the functions of the above-described embodiments to a system or apparatus via a network or various storage media, and causing a computer, or a central processing unit (CPU) or micro processing unit (MPU), of the system or apparatus to read out and execute the program.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
