Head-mounted information processing device and head-mounted display system

Document No.: 1804149    Publication date: 2021-11-05

Reading note: This technology, "Head-mounted information processing device and head-mounted display system", was devised by 高见泽尚久, 中出真弓, and 冈田义宪 on 2019-03-18. Its main content is as follows. The present invention makes it possible to easily grasp, among other things, the location of a virtual object placed in a real space different from the user's current real space. In a head-mounted information processing device (100), a control unit (125) has a virtual object generation processing unit (155) that generates virtual objects to be displayed by a display unit. The virtual object generation processing unit (155) generates a first virtual object arranged in association with a first real space in which the user is located, and a second virtual object arranged in association with a second real space different from the first real space. The control unit (125) causes the display (122) to display the first virtual object and the second virtual object in accordance with virtual object display instruction information that is input from the operation input interface (121) and that instructs display of the first virtual object and the second virtual object.

1. A head-mounted information processing apparatus characterized by comprising:

an operation input interface for inputting information;

a camera unit for taking a picture of a real space;

a display unit that displays a real image captured by the camera unit; and

a control section for controlling the display section,

the control section has a virtual object generation processing section that generates a virtual object to be displayed by the display section,

the virtual object generation processing unit generates a first virtual object arranged in association with a first real space in which a user is present and a second virtual object arranged in association with a second real space different from the first real space,

the control unit causes the display unit to display the first virtual object and the second virtual object in accordance with virtual object display instruction information that is input from the operation input interface and instructs the display unit to display the first virtual object and the second virtual object.

2. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control unit causes the second virtual object to be displayed at the same position coordinates as when displayed in the second real space.

3. The head-mounted information processing apparatus according to claim 2, characterized in that:

the control unit causes the display unit to display the first virtual object or the second virtual object as a transparent image when the first virtual object and the second virtual object are displayed in an overlaid manner.

4. The head-mounted information processing apparatus according to claim 2, characterized in that:

the control unit causes the display unit to display the first virtual object and the second virtual object at positions shifted from each other when the first virtual object and the second virtual object would otherwise be displayed in a superimposed manner.

5. The head-mounted information processing apparatus according to claim 4, characterized in that:

the control unit generates a display mark indicating the display position at which the first virtual object and the second virtual object are superimposed, and virtual lines connecting the display mark with the first virtual object and with the second virtual object, and causes the display unit to display the display mark and the virtual lines.

6. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control unit causes the display unit to display the first virtual object and the second virtual object in a switched manner in accordance with the virtual object display instruction information input from the operation input interface.

7. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control unit causes the display unit to display the first virtual object and the second virtual object, switching between them at predetermined time intervals.

8. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control unit causes the display unit to display the live-view image of the first real space captured by the camera unit when the first virtual object and the second virtual object are displayed.

9. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control section has a virtual object posture operation processing section capable of operating a posture of the first virtual object or the second virtual object selected via the operation input interface,

the virtual object posture operation processing unit operates the posture of the first virtual object or the second virtual object selected via the operation input interface.

10. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control section has a virtual object transformation operation processing section capable of performing a transformation operation of transforming the first virtual object or the second virtual object selected via the operation input interface,

the virtual object transformation operation processing unit performs the transformation operation on the first virtual object or the second virtual object selected via the operation input interface.

11. The head-mounted information processing apparatus according to claim 1, characterized in that:

the control unit generates a first display screen on which a first virtual object arranged in association with the first real space is displayed and a second display screen on which a second virtual object arranged in association with the second real space is displayed, and displays the generated first display screen and second display screen in an aligned manner on the display unit.

12. The head-mounted information processing apparatus according to claim 11, characterized in that:

the control unit enlarges and displays the selected screen on the display unit when any one of the first display screen and the second display screen is selected via the operation input interface.

13. A head-mounted display system, comprising:

a head-mounted information processing apparatus connected to a communication network and displaying a real-space object and a virtual object; and

a virtual object generation server device connected to the communication network,

the head-mounted information processing apparatus includes:

an operation input interface for inputting information;

a display unit that displays the virtual object; and

a control section for controlling the display section,

the virtual object generation server device includes:

a virtual object generation processing unit that generates the virtual object; and

a communication interface capable of receiving information from and transmitting information to the communication network,

the virtual object generation processing unit generates a first virtual object arranged in association with a first real space in which a user is present and a second virtual object arranged in association with a second real space different from the first real space, in accordance with virtual object display instruction information that is input from the operation input interface and instructs a virtual object to be displayed,

the communication interface transmits the first virtual object and the second virtual object generated by the virtual object generation processing section to the communication network,

the control unit causes the display unit to display the first virtual object and the second virtual object transmitted via the communication network.

Technical Field

The present invention relates to a head-mounted information processing apparatus and a head-mounted display system, and more particularly to a technique effective for grasping the position of a virtual object.

Background

In recent years, Virtual Reality (VR) technology, Augmented Reality (AR) technology, or Mixed Reality (MR) technology has been widely used.

Virtual reality is a technology that creates a virtual world resembling reality and lets the user experience the sensation of actually being there. Augmented reality is a technology that adds digital information to the real world, overlaying a virtual space (virtual objects) created with CG (Computer Graphics) or the like onto the real space to extend it. Mixed reality is a technology that combines and fuses information from the real world with an artificially created virtual world built with CG or the like.

As a tool for embodying these technologies, a head-mounted information processing apparatus worn on the head and equipped with a display, a camera, and the like is widely used. To increase the sense of reality of a virtual object, display methods are being put into practical use that associate the virtual object with the spatial coordinates of the real space and present it as if a real object existed there.

Such a display method has the problem that, if the user can go to the real space in which the target virtual object is placed, the user can view the virtual object and an intuitive operation scheme can be realized, but if the user cannot go to that real space, the user can neither view nor operate the virtual object.

As a technique for solving this problem, there is a technique in which even if the wearer moves around in the real world space, at least a part of the augmented reality object remains in the real world space and can be easily accessed (for example, see patent document 1).

Documents of the prior art

Patent document

Patent document 1: Japanese PCT National Publication (Kohyo) No. 2018-505472

Disclosure of Invention

Technical problem to be solved by the invention

Patent document 1 describes displaying a virtual object so that it remains in the user's field of view as the user moves, but gives no consideration to displaying a virtual object associated with a separate real space. There is therefore a problem that a virtual object in another real space cannot be viewed and operated with good usability.

The purpose of the present invention is to provide a technique that makes it possible to easily grasp the location of a virtual object placed in a real space different from the user's current real space.

The above and other objects and novel features of the present invention will be apparent from the description of the present specification and the accompanying drawings.

Means for solving the problems

The invention disclosed in the present application will be briefly described below in a summary of a representative embodiment.

That is, a typical head-mounted information processing apparatus includes an operation input interface, a camera section, a display section, and a control section. The operation input interface inputs information. The camera portion photographs a real space. The display unit displays a live image captured by the camera unit. The control unit controls the display unit.

The control unit further includes a virtual object generation processing unit that generates a virtual object to be displayed on the display unit. The virtual object generation processing unit generates a first virtual object arranged in association with a first real space in which a user is present and a second virtual object arranged in association with a second real space different from the first real space.

The control unit causes the display unit to display the first virtual object and the second virtual object in accordance with virtual object display instruction information that is input from the operation input interface and instructs the display of the first virtual object and the second virtual object.

Effects of the invention

The effects obtained by the representative embodiments in the invention disclosed in the present application will be briefly described below.

Since the location of a virtual object arranged in a different real space can be reliably viewed, convenience can be improved.

Drawings

Fig. 1 is a block diagram showing an example of the configuration of a head-mounted information processing apparatus according to embodiment 1.

Fig. 2 is an explanatory diagram showing an example of the surrounding view of the usage state of the head-mounted information processing apparatus of fig. 1.

Fig. 3 is an explanatory diagram showing an example of a list display of virtual object groups in the head-mounted information processing apparatus of fig. 1.

Fig. 4 is an explanatory view showing another example of the list display of the virtual object groups in fig. 3.

Fig. 5 is an explanatory view showing another example of the list display of the virtual object groups in fig. 4.

Fig. 6 is an explanatory diagram showing an example of a usage state in the head-mounted information processing apparatus of fig. 1.

Fig. 7 is an explanatory diagram showing an example of a display screen of a virtual object group displayed in a list in the surrounding panorama example of fig. 6.

Fig. 8 is an explanatory diagram showing another example of the usage state of fig. 6.

Fig. 9 is an explanatory diagram showing an example of a display screen of a virtual object group displayed in a list in the surrounding panorama example of fig. 8.

Fig. 10 is an explanatory diagram showing an example of display of a virtual object group by the head-mounted information processing apparatus of fig. 1.

Fig. 11 is an explanatory diagram showing another example of the display of the virtual object group of fig. 10.

Fig. 12 is an explanatory diagram showing an example of switching display of a virtual object group in the head-mounted information processing apparatus of fig. 1.

Fig. 13 is an explanatory diagram showing an example of the enlargement and reduction of the virtual object and the posture operation by the head-mounted information processing apparatus of fig. 1.

Fig. 14 is an explanatory diagram showing another example of fig. 13.

Fig. 15 is an explanatory diagram showing an example of a surrounding panorama when all virtual objects in a plurality of real spaces are viewed.

Fig. 16 is an explanatory diagram illustrating an example of display of a virtual object in a state viewed from a direction opposite to the entrance and exit at the rear of fig. 15.

Fig. 17 is an explanatory diagram showing another example of the display of the virtual object in fig. 16.

Fig. 18 is an explanatory view showing another example of the list display of the virtual object groups in fig. 16.

Fig. 19 is an explanatory diagram showing an example of a multiple display screen of the head-mounted information processing apparatus of fig. 1.

Fig. 20 is an explanatory diagram showing another display example of fig. 19.

Fig. 21 is a block diagram showing an example of the configuration of a head-mounted display system according to embodiment 2.

Detailed Description

In all the drawings for explaining the embodiments, the same components are denoted by the same reference numerals in principle, and redundant explanations thereof are omitted.

(embodiment mode 1)

The embodiments are explained in detail below.

< example of configuration of head-mounted information processing apparatus >

Fig. 1 is a block diagram showing an example of the configuration of a head-mounted information processing apparatus according to embodiment 1.

As shown in fig. 1, the head-mounted information processing device 100 is composed of a camera unit 111, a right-eye sight line detection unit 112, a left-eye sight line detection unit 113, a vibration generation unit 117, an ambient sound microphone 118, a human voice microphone 119, earphones 120, an operation input interface 121, a display 122, a control unit 125, a memory 124, a depth sensor 142, an acceleration sensor 143, a gyroscope 144, a geomagnetic sensor 145, and a stimulus generation unit 146. These functional blocks are connected to each other via a bus 140.

The camera unit 111 captures the landscape in front of the user. The display 122, as the display unit, displays the real image of the real space captured by the camera unit 111. The camera unit 111 may be a single camera, a plurality of cameras, or a 360-degree omnidirectional camera capable of capturing images in all directions by combining a plurality of cameras.

The control unit 125 controls each functional block by executing a program 126, which will be described later, stored in the memory 124, and controls the overall operation of the head-mounted information processing apparatus 100.

The control unit 125 includes a display control unit 151, a data management unit 152, a video processing unit 153, a virtual object posture operation processing unit 154, a virtual object generation processing unit 155, and a virtual object transformation operation processing unit 156.

The virtual object generation processing unit 155 generates a virtual object group including at least 1 virtual object in a virtual space different from the real space. The virtual object generation processing unit 155 arranges the generated virtual object group in association with the real space.

Here, the virtual object group arranged in association with the first real space viewed or displayed on the display 122 is set as the first virtual object group. In addition, a virtual object group arranged in association with a second real space that is a different real space from the first real space is set as a second virtual object group.

Similarly, a virtual object group arranged in association with a third real space that is a real space different from the first and second real spaces is set as a third virtual object group. A virtual object group arranged in association with a fourth real space that is a real space different from the first to third real spaces is set as a fourth virtual object group.

The virtual object generation processing unit 155 generates a virtual object based on the prototype data of the virtual object read from the memory 124 in accordance with the user operation input from the operation input interface 121.

Further, prototype data of the virtual object is not strictly required; virtual object data may also be generated directly by user operations without a prototype. For example, when a rectangular parallelepiped virtual object is generated, the 8 points that become its vertices are specified in real space by user operations through the operation input interface 121.
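
As an illustration of this prototype-free generation, the following is a minimal Python sketch; the patent prescribes no language or data layout, and the class and field names here are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]  # real-space coordinates, e.g. in meters

@dataclass
class VirtualObject:
    object_id: str
    vertices: List[Point]  # the eight corner points picked by the user
    anchor_space: str      # the real space the object is associated with

def cuboid_from_user_points(object_id: str, points: List[Point],
                            anchor_space: str) -> VirtualObject:
    """Build a rectangular-parallelepiped virtual object directly from the
    eight vertex points specified via the operation input interface."""
    if len(points) != 8:
        raise ValueError("a rectangular parallelepiped needs exactly 8 vertices")
    return VirtualObject(object_id, list(points), anchor_space)
```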

The virtual object posture operation processing section 154 rotates, enlarges, and reduces a virtual object displayed on the display 122 so that it takes an easy-to-see posture. This is referred to as a posture operation. The result of a posture operation is not reflected in the posture, shape, orientation, and the like of the original virtual object.

The virtual object transformation operation processing unit 156 performs transformation operations on a virtual object displayed on the display 122. A transformation operation is, for example, changing the direction of the virtual object, changing its size, changing its shape, deleting a part of it, deleting the whole of it, or the like. Unlike a posture operation, a transformation operation by the virtual object transformation operation processing unit 156 is reflected in the posture, shape, orientation, and the like of the original virtual object.
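
The asymmetry between the two operations can be summarized in a short sketch: a posture operation changes only viewer-side state, while a transformation operation writes through to the stored object. This is one interpretation of the text above, with illustrative names only:

```python
from dataclasses import dataclass, field

@dataclass
class StoredObject:              # the object record kept as information data
    size: float = 1.0
    parts: list = field(default_factory=lambda: ["body"])

@dataclass
class ObjectView:                # per-display state used while viewing
    obj: StoredObject
    view_scale: float = 1.0
    view_rotation_deg: float = 0.0

def posture_operation(view: ObjectView, scale: float, rotation_deg: float):
    """Rotate/enlarge for easier viewing; the stored object is untouched."""
    view.view_scale *= scale
    view.view_rotation_deg += rotation_deg

def transformation_operation(obj: StoredObject, new_size: float, drop_part: str):
    """Change the object itself; every space that displays it sees the change."""
    obj.size = new_size
    if drop_part in obj.parts:
        obj.parts.remove(drop_part)
```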

The image processing unit 153 processes the video data captured by the camera unit 111 and stores the processed video data as the information data 127 in the memory 124. The image processing unit 153 simplifies the video data captured by the camera unit 111, mainly to reduce the amount of display data and improve visibility. For example, when a roughly rectangular bookshelf appears in the video data, the video data is simplified by reducing its outer shape to a rectangular parallelepiped of the same size.

The image processing unit 153 also performs image processing that makes each space easier to recognize. For example, based on the video data captured by the camera unit 111 along the user's line of sight, it generates image data of a plan view looking down on the space where the user was located at the time of capture.
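
Both processing steps reduce to simple geometry. The following hedged sketch shows the two ideas, bounding-box simplification and a top-down plan view, assuming points are given as (x, y, z) with y as the height axis:

```python
from typing import Iterable, List, Tuple

Point3 = Tuple[float, float, float]

def simplify_to_box(points: Iterable[Point3]) -> Tuple[Point3, Point3]:
    """Replace a detailed shape (e.g. a bookshelf) with the min/max corners
    of a rectangular parallelepiped of the same size, cutting display data."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def plan_view(points: Iterable[Point3]) -> List[Tuple[float, float]]:
    """Produce top-down plan coordinates by dropping the height axis."""
    return [(x, z) for (x, _y, z) in points]
```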

The display control unit 151 generates display data by appropriately combining the information data 127 stored in the memory 124, and displays the display data on the display 122. In this case, the information data 127 includes virtual object data, video data captured by the camera unit 111, and processed display data generated by the video processing unit 153.

The data management unit 152 manages real-time image data captured by the camera unit 111, data of a virtual object, processed display data generated by the image processing unit 153, and the like.

The control unit 125 is composed of a CPU (Central Processing Unit) and the like, which may include dedicated processors for particular kinds of processing such as a GPU (Graphics Processing Unit). It controls the operation of the entire head-mounted information processing apparatus 100 by executing the program 126 stored in the memory 124 and thereby controlling each functional block. The program 126 includes the OS (Operating System) of the head-mounted information processing device 100 and operation control applications.

The control unit 125 controls the display control unit 151 so that a virtual object group arranged in association with a real space different from the first real space, for example the second virtual object group arranged in association with the second real space, is arranged and displayed in the first real space in accordance with the virtual object display instruction information input from the operation input interface 121. This enables viewing and operation of a virtual object group arranged in association with a real space different from the first real space.

When the virtual object group generated by the virtual object generation processing unit 155 is displayed in the display field of view of the display 122, for example, an omnidirectional image showing the entire landscape around the head-mounted information processing device 100 is projected onto the display 122, and the virtual object group is arranged at predetermined positions in the projected omnidirectional image.
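
Arranging a virtual object "at a predetermined position" in the projected omnidirectional image amounts to mapping its 3D direction from the headset to panorama coordinates. The sketch below assumes the standard equirectangular mapping, which the patent does not fix:

```python
import math

def equirect_uv(x: float, y: float, z: float):
    """Map a direction (x right, y up, z forward from the headset) to
    (u, v) in an equirectangular panorama, both normalized to [0, 1]."""
    yaw = math.atan2(x, z)                    # azimuth around the vertical axis
    pitch = math.atan2(y, math.hypot(x, z))   # elevation above the horizon
    u = (yaw / (2 * math.pi)) % 1.0
    v = 0.5 - pitch / math.pi                 # 0 = zenith, 1 = nadir
    return u, v

print(equirect_uv(0.0, 0.0, 1.0))  # straight ahead -> (0.0, 0.5)
```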

In this case, the control unit 125 controls the display control unit 151 to arrange and display a first virtual object group and a virtual object group arranged in association with a real space different from the first real space, for example, a second virtual object group arranged in association with the second real space or the like, in the first real space.

The control unit 125 may sequentially switch between the first virtual object group and the second virtual object group arranged in association with the second real space, and the like, and display the switched objects on the display screen of the display 122.

The control unit 125 may also display the first virtual object group and the virtual object groups arranged in real spaces different from the first real space all together, arranged within the display screen of the display at the visual field position of the first real space.

The control unit 125 may reduce the display screens of the virtual object groups arranged in association with the real spaces to perform multiple display, and display the display screens of the virtual object groups of the selected real spaces in a normal size so that the desired virtual objects arranged in the selected real spaces can be viewed and manipulated.

Alternatively, for a virtual object arranged in the display field of view, the control unit 125 controls the virtual object posture operation processing unit 154 to enlarge, reduce, or otherwise manipulate the posture of the virtual object so that its entire shape is easy to recognize, and displays the virtual object after the posture operation using the display control unit 151.

The memory 124 is a nonvolatile memory such as a flash memory, and stores various programs 126 and information data 127 used by the control unit 125. The information data 127 is data of a virtual object group, coordinate position information of the virtual object group, data of a live image, and the like.

The display 122 is configured by a liquid crystal panel or the like, and displays a virtual object, a live image in real space, and the like. The display 122 displays a screen showing display contents such as notification information and an operation state to the user.

For example, when displaying a live image and a virtual object captured by the camera unit 111, the virtual object is arranged and displayed at a predetermined position on an omnidirectional image showing the entire surrounding landscape of the head-mounted information processing device 100. Further, the display 122 displays each virtual object group associated with a plurality of real spaces in multiple on the display screen.

The right-eye line-of-sight detecting section 112 detects the line of sight of the right eye of the user. The left-eye line-of-sight detecting unit 113 detects the line of sight of the left eye of the user. In the process of detecting the line of sight, a commonly used known technique can be used as the eye tracking process.

For example, in the method using corneal reflection, a known technique irradiates the face with an infrared LED (Light Emitting Diode) and captures it with an infrared camera, uses the position on the cornea of the reflected light produced by the infrared LED (the corneal reflection) as a reference point, and detects the line of sight from the position of the pupil relative to the position of the corneal reflection.
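
In its simplest form, this pupil-minus-glint method measures the pupil center's offset from the corneal reflection and maps it to gaze angles through per-user calibration gains. The following is a deliberately simplified sketch; real eye trackers fit a richer calibration model, and the gain values are placeholders:

```python
Pixel = tuple  # (x, y) position in the infrared camera image

def gaze_angles(pupil: Pixel, glint: Pixel,
                gain_deg_per_px=(0.1, 0.1)) -> tuple:
    """Estimate (horizontal, vertical) gaze angles in degrees from the pupil
    position relative to the corneal-reflection reference point."""
    dx = pupil[0] - glint[0]
    dy = pupil[1] - glint[1]
    return dx * gain_deg_per_px[0], dy * gain_deg_per_px[1]

print(gaze_angles((322, 240), (310, 238)))  # approx. (1.2, 0.2) degrees
```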

The acceleration sensor 143 is a sensor that detects acceleration, which is a change in speed per unit time, and can recognize motion, vibration, impact, and the like. The gyroscope 144 is a sensor that detects an angular velocity in the rotation direction, and can grasp the state of the vertical, horizontal, and oblique postures. Thus, the movement of the head of the user wearing the head-mounted information processing device main body 100 can be detected using the acceleration sensor 143 and the gyroscope 144.

The geomagnetic sensor 145 is a sensor that detects the magnetic force of the earth, and detects the direction in which the head-mounted information processing apparatus main body 100 is facing. The geomagnetic sensor 145 is a 3-axis geomagnetic sensor that detects not only the front-back direction and the left-right direction but also the up-down direction, and can detect the movement of the head by detecting a change in the geomagnetism with respect to the movement of the head.

These sensors can detect the movement and fluctuation of the head-mounted information processing apparatus 100 worn by the user in detail.
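
A common way to turn such raw readings into a stable head-pose estimate is a complementary filter that fuses the fast gyroscope rate with the drift-free accelerometer tilt. This is a generic technique offered here for illustration, not a method the patent claims:

```python
def fuse_pitch(prev_pitch_deg: float, gyro_rate_deg_s: float,
               accel_pitch_deg: float, dt_s: float,
               alpha: float = 0.98) -> float:
    """Complementary filter: integrate the gyro for fast head motion and
    pull gently toward the accelerometer tilt to cancel gyro drift."""
    return (alpha * (prev_pitch_deg + gyro_rate_deg_s * dt_s)
            + (1.0 - alpha) * accel_pitch_deg)

pitch = 0.0
for gyro_rate, accel_pitch in [(10.0, 0.2), (9.5, 0.4)]:  # sample readings
    pitch = fuse_pitch(pitch, gyro_rate, accel_pitch, dt_s=0.01)
print(round(pitch, 3))  # estimated head pitch in degrees
```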

The depth sensor 142 measures the distance to an object over a surface rather than at a single point. The depth sensor 142 may be implemented by, for example, a sensor using the reflection of infrared rays or laser light, or by another method such as obtaining distance information from the parallax of images captured by a plurality of cameras at different mounting positions.
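
For the multi-camera variant, distance follows from the classic stereo parallax relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch of that standard relation:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to a point seen by two cameras a baseline apart:
    Z = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(700.0, 0.06, 21.0))  # -> 2.0 (meters)
```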

The control unit 125 can detect the movement of the hand and the movement of the body by analyzing the distance information acquired by the depth sensor 142. When analyzing the hand movement and the body movement, information obtained from the image captured by the camera unit 111 may be used together.

The stimulus generating unit 146 generates a stimulus that can be perceived by the skin under the control of the control unit 125. The stimulus generating unit 146 converts notification information to be sent to the user by the head-mounted information processing device 100 into a stimulus that can be perceived by the skin.

The stimuli that can be sensed by the skin are pressure, warmth, cold, and electrical stimulation. The stimulus generating unit 146 can reliably convey a notification to the user by generating a stimulus perceivable by the skin of the head of the user wearing the device in close contact.

The vibration generating unit 117 generates vibration under the control of the control unit 125, and includes, for example, a vibrator, a virtual haptic device, a force feedback device, and the like. The vibration generation section 117 converts notification information to the user into vibration. The vibration generating unit 117 generates vibration on the head of the user wearing the device, thereby reliably transmitting a notification to the user.

The ambient sound microphone 118 and the human voice microphone 119 collect sounds from the outside and the user's own voice production. The human voice microphone 119 may be a sound input device such as a bone conduction microphone.

The earphone 120 is worn on the user's ears and lets the user hear sound, so notification information can be conveyed to the user by sound. The earphone 120 may be a speaker or a sound output device such as a bone conduction earphone.

The operation input interface 121 is configured by, for example, a keyboard, buttons, a touch panel, or the like, and sets and inputs information to be input by the user. The operation input interface 121 may be provided at a position where a user can easily perform an input operation.

The operation input interface 121 may be separated from the main body of the head-mounted information processing apparatus 100 and connected thereto by wire or wirelessly. Examples of the input operation device separated from the head-mounted information processing device 100 include a three-dimensional mouse, a control device, and the like.

The three-dimensional mouse is a 3-dimensional spatial position input device using a gyroscope, an acceleration sensor, or the like. The control device detects and inputs the spatial position of the controller itself worn on the body based on the camera image showing the body, various sensor information built in the control device, and the like.

The operation input interface 121 may display an input operation screen on the display screen of the display 122 and acquire input operation information from the position on the input operation screen at which the line of sight detected by the right-eye line-of-sight detecting unit 112 and the left-eye line-of-sight detecting unit 113 is directed.

The operation input interface 121 may display a pointer on the input operation screen and acquire input operation information by having the user operate the pointer. The operation input interface 121 may also acquire input operation information by having the user utter a voice representing an input operation and collecting it with the human voice microphone 119.

By using utterance and display for input operations in this way, the usability of the head-mounted information processing apparatus worn on the head can be further improved.

With the above configuration, in accordance with a virtual object display request instruction input through the operation input interface 121, a virtual object group arranged in association with a real space different from the first real space, for example the second virtual object group arranged in association with the second real space, can be displayed superimposed in the first real space, or displayed by switching.

In addition, by displaying all the virtual object groups, including those of different real spaces, at the visual field position of the first real space, virtual objects in different real spaces can easily be viewed and operated. Moreover, when a large number of virtual object groups exist, this eliminates situations in which it is unclear in which real space a target virtual object exists.

It is also possible to operate a virtual object arranged in another real space without moving from the real space currently being viewed. For example, when a calendar is placed as a virtual object on a wall of real space A and an appointment is to be checked or written in from another real space B, the user can view and operate the calendar virtual object without moving to real space A.

This example is a transformation operation on the calendar virtual object and is processed by the virtual object transformation operation processing unit 156. The result of a transformation operation performed by the virtual object transformation operation processing unit 156 can be reflected in the original object: for example, when an appointment Z is written into the wall calendar of real space A as a transformation operation performed from real space B, the written appointment Z can be seen when the wall calendar is viewed in real space A.
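
The calendar example maps naturally onto a shared object store: a transformation performed from real space B is written to the single stored record, and that same record is what real space A renders. A hypothetical sketch of this write-through behavior:

```python
# Single shared record for the calendar virtual object anchored in room A.
store = {"wall_calendar": {"anchor_space": "room_A", "entries": []}}

def transform_write(store: dict, object_id: str, entry: str) -> None:
    """Transformation operation: the change persists on the object itself."""
    store[object_id]["entries"].append(entry)

transform_write(store, "wall_calendar", "appointment Z")  # done from room B
print(store["wall_calendar"]["entries"])  # ['appointment Z'], seen in room A
```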

< example of operation of head-mounted information processing apparatus >

Next, the operation of the head-mounted information processing apparatus 100 will be described.

Fig. 2 is an explanatory diagram showing an example of the surrounding view of the usage state of the head-mounted information processing apparatus 100 in fig. 1. Fig. 3 is an explanatory diagram showing an example of a list display of virtual object groups by the head-mounted information processing apparatus 100 of fig. 1. Fig. 3 shows an example of display in the case where the virtual object groups listed in the usage situation shown in fig. 2 are arranged so as to be accommodated in the display screen of the display 122.

In fig. 2, a user 200 wearing the head mounted information processing apparatus 100 is located at the center of a first room 201 and looks in a direction 203 opposite to a rear entrance door 202.

A desk 204 and a computer 205 are placed on the front side of the front of the user 200, and a bookshelf 206 is placed on the back side of the user 200. The virtual objects 211 to 213 are a first virtual object group and are generated by the virtual object generation processing unit 155.

The virtual object 211 is arranged on the front of the user 200. Virtual object 212 is disposed on the right side of table 204. The virtual object 213 is arranged on the right side of the bookshelf 206 behind the user.

In the surrounding panorama of the first room 201 shown in fig. 2, the user 200 either directly views the first real space seen in the direction 203 of the first room 201, or views the real image captured by the camera unit 111 on the display 122.

For viewing the first virtual object group, the virtual objects 211 to 213 are displayed in a list as shown in fig. 3. The list display projects the omnidirectional image captured by the camera unit 111 onto the display screen of the display 122 and displays all the virtual objects within the projected omnidirectional image. In this case, the virtual objects 211 to 213 are arranged at their predetermined positions.

The display control unit 151 displays the virtual objects 211 to 213 on the display 122 based on the data read by the data management unit 152. The data management unit 152 reads the shape data and the arrangement coordinate data of the virtual objects 211 to 213 recorded as the information data 127 in the memory 124 and outputs the shape data and the arrangement coordinate data to the display control unit 151. At this time, the virtual object posture operation processing unit 154 performs a posture operation of the virtual object as necessary.

This makes it possible to view all virtual objects existing in the surrounding panorama together with the existing position.

The portions indicated by broken lines in fig. 3 are real-image objects, drawn to make the positional relationship between the real objects and the virtual objects easy to see. The real objects indicated by broken lines therefore need not be displayed.

Fig. 4 is an explanatory view showing another example of the list display of the virtual object groups in fig. 3. Fig. 4 shows an example in which only the virtual objects 211 to 213 are displayed without displaying the real object indicated by the broken line.

Fig. 5 is an explanatory view showing another example of the list display of the virtual object groups in fig. 4. When displaying all the virtual objects existing in the surrounding panorama, as shown in fig. 5, live-action objects such as a table 204, a computer 205, and a bookshelf 206 indicated by the broken lines in fig. 3 may be displayed as background images. This makes it easier to recognize the positional relationship between the table 204, the computer 205, the bookshelf 206 and the virtual objects 211 to 213 in real space.

For example, in the case where the virtual object 213 is located on the upper right side of the bookshelf 206 on the back side as shown in fig. 2, the bookshelf 206 is displayed as a background image on the lower left side of the virtual object 213, and the virtual object position can be easily recognized.

As described above, the display control unit 151 generates display data such as a virtual object, and displays the data on the display 122. The display control unit 151 reads data such as the shape and display position of the object included in the information data 127, the background image captured by the camera unit 111, data generated by the image processing unit 153, and the like from the memory 124, and generates display data from the read data.

The virtual object posture operation processing unit 154 performs a posture operation of the virtual object as necessary, and adjusts the display position, size, shape, and the like so that the virtual object is displayed at a corresponding position on the screen. This adjustment is performed by the virtual object posture operation processing section 154 based on the command of the program 126 recorded in the memory 124.

Further, an omnidirectional image representing the entire surrounding landscape may be acquired using a full-solid-angle omnidirectional camera capable of capturing an omnidirectional image at once. Alternatively, a plurality of images captured by a camera with a normal angle of view may be combined by the image processing unit 153 to generate such an image.

In addition, the landscape image may be displayed using only a partial range of the images that can be acquired. For example, a partial-range image such as a hemispherical image of the upper half of the full solid angle, rather than the entire image, can be used to view the presence of substantially all the virtual objects.

Fig. 6 is an explanatory diagram showing an example of a usage state in the head-mounted information processing apparatus 100 of fig. 1. Fig. 7 is an explanatory diagram showing an example of a display screen of a virtual object group displayed in a list in the surrounding panorama example of fig. 6.

Fig. 6 shows a state in which the user 200 wearing the head-mounted information processing device 100 is positioned near the entrance door 402 of the second room 401 and looks in the indoor direction 403 of the second room 401. Further, a television cabinet 404 and a television 405 are provided on the front side of the front of the user 200. The wall on the right side from the user 200 is provided with a shelf 406.

The second virtual object group generated by the virtual object generation processing unit 155 is composed of virtual objects 411 to 413. In fig. 7, the virtual object 411 is located on the rear upper side of the television 405. Virtual object 412 is located on the rear right side of television 405. The virtual object 413 is located near the left side wall of the entrance/exit door 402.

In the panoramic view around the second room 401 shown in fig. 6, the user 200 views the second real space in the indoor direction 403 directly or displays an image of the real space captured by the camera unit 111 on the display 122.

As shown in fig. 7, the second virtual object group is viewed by projecting an omnidirectional image representing the entire surrounding landscape onto the display screen of the display 122 and arranging and displaying the virtual objects 411 to 413 in a list at predetermined positions in the projected omnidirectional image.

Thus, as in the case of fig. 3, the presence of all virtual objects existing in the surrounding panorama can be viewed together with the presence position. In the case of displaying all the virtual objects existing in the entire surrounding panorama, live objects such as the tv cabinet 404, the tv set 405, and the stand 406 shown by the broken lines in fig. 7 may be displayed as background images.

This makes it possible to easily recognize the positional relationship between the real space and the virtual object. Note that the display processing in the display 122 is similar to that in fig. 3, and therefore, the description thereof is omitted.

Fig. 8 is an explanatory diagram showing another example of the usage state of fig. 6. Fig. 9 is an explanatory diagram showing an example of a display screen of a virtual object group displayed in a list in the surrounding panorama example of fig. 8. Fig. 8 shows a state in which the user 200 wearing the head-mounted information processing device 100 is positioned at the center of the third room 601 and sees the entrance/exit door 602 in the back-left direction 603. The board 604 is positioned in front of the user 200, and the window 605 is on the right side of the user 200. Further, a clock 606 is disposed on the right side of the window 605.

The third virtual object group generated by the virtual object generation processing unit 155 is composed of virtual objects 611 to 613. In fig. 8, a virtual object 611 is located on the left side of the board 604, and a virtual object 612 is located above the window 605. Virtual object 613 is located behind user 200.

In the panoramic state of the surroundings in the third room 601 shown in fig. 8, the user 200 views the third real space displayed in the direction 603 directly or displays the image of the real space captured by the camera unit 111 on the display 122.

As shown in fig. 9, the third virtual object group is viewed by projecting an omnidirectional image representing the entire surrounding landscape onto the display screen of the display 122, arranging the virtual objects 611 to 613 at their positions in the projected omnidirectional image, and displaying all of the virtual objects 611 to 613 in a list.

Thus, as in the case of fig. 3 and 7, the presence of all virtual objects present in the surrounding panorama can be viewed together with the presence position. In displaying all the virtual objects existing in the entire surrounding panorama, a live object such as a board 604, a window 605, and a clock 606 shown by a dotted line in fig. 9 may be displayed as a background image.

This makes it possible to easily recognize the positional relationship between the real space and the virtual object. Note that the display processing in the display 122 is similar to that in fig. 3, and therefore, the description thereof is omitted.

Fig. 10 is an explanatory diagram showing an example of display of a virtual object group by the head-mounted information processing apparatus of fig. 1. The examples shown in figs. 3, 7, and 9 are examples in which the virtual object group arranged in association with each real space is viewed within that real space. In these examples, however, a virtual object group arranged in association with a real space different from the currently viewed real space cannot be viewed in the currently viewed real space.

Fig. 10 illustrates a display example in which the virtual object groups arranged in association with the respective real spaces are displayed superimposed on the display screen of the display 122 while the visual field position of the current real space is kept unchanged.

In fig. 10, the portions denoted by the same reference numerals as those shown in fig. 3, 7, and 9 have the same operations as those already described in fig. 3, 7, and 9, and therefore, detailed description thereof will be omitted.

Fig. 10 shows a state in which, in the currently viewed first real space, the second virtual object group arranged in association with the second real space and the third virtual object group arranged in association with the third real space are displayed on the display screen of the display 122 in addition to the first virtual object group arranged in association with the first real space.

The first virtual object group is composed of virtual objects 211, 212, 213. The second virtual object group is composed of virtual objects 411, 412, 413. The third virtual object group is composed of virtual objects 611, 612, 613.

As shown in fig. 10, the virtual objects 411 to 413 and 611 to 613 placed in other real spaces are superimposed and displayed in the current real space, so all the virtual objects arranged in association with the other real spaces can be viewed without switching real spaces.

Thus, even if there are many real spaces and virtual object groups, a desired virtual object can easily be found on a display screen showing all the virtual objects. In addition, a desired operation such as correcting the selected virtual object can be performed easily. As a result, usability can be improved.

The above display is performed on the display 122 by the display control unit 151. The virtual object posture operation processing unit 154 performs posture operations on the virtual objects as necessary.

The virtual object posture operation processing unit 154 adjusts the display position, size, shape, and the like so that the virtual object is displayed at a corresponding position on the screen in accordance with the command of the program 126 stored in the memory 124. The display control unit 151 generates display data from the data adjusted by the virtual object posture operation processing unit 154, and displays the display data on the display 122.

Fig. 11 is an explanatory diagram showing another example of the display of the virtual object group of fig. 10. Fig. 11 shows an example in which virtual objects displayed on the display screen of the display 122 are arranged at substantially the same coordinate position, and the virtual objects are displayed in a superimposed manner.

In fig. 11, the same reference numerals as those shown in fig. 2, 3, 6, 7, 8, 9, and 10 are given to the same parts, and since the same operations as those already described in fig. 2, 3, 6, 7, 8, 9, and 10 are given, the detailed description thereof will be omitted.

The virtual objects 212 and 412 shown in fig. 10 are arranged at almost the same coordinate position and displayed superimposed, which reduces their visibility. Therefore, the superimposed virtual objects 212 and 412 are displayed as virtual objects 901 and 902 at non-overlapping coordinate positions shifted from each other, as shown in fig. 11.

Further, a mark 903 is displayed at the original coordinate position where the virtual objects 212 and 412 were arranged, and virtual lines 904 and 905 are displayed connecting the mark 903 with the virtual objects 901 and 902 whose display positions have been shifted.

In this way, virtual objects that would be displayed overlapping at almost the same position are displayed shifted from each other, so the virtual objects can be displayed without overlap. Displaying the mark 903 makes it easy to recognize where the virtual objects were originally arranged, and displaying the virtual lines 904 and 905 further improves the visibility of the virtual objects.
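
One plausible way to implement this shift-and-mark display is sketched below: objects whose panorama coordinates nearly coincide are pushed sideways, a marker is kept at the original spot, and leader lines record the correspondence. This simplified variant leaves the first object of each cluster in place; in fig. 11 both objects are shifted, but the bookkeeping is the same:

```python
def resolve_overlaps(placements, min_sep=0.05, step=0.08):
    """placements: {object_id: (u, v)} panorama coordinates in [0, 1].
    Returns (display_positions, markers, leader_lines)."""
    display_pos, markers, leaders, taken = {}, [], [], []
    for oid, (u, v) in placements.items():
        u2, shift = u, 0
        while any(abs(u2 - tu) < min_sep and abs(v - tv) < min_sep
                  for tu, tv in taken):
            shift += 1
            u2 = u + step * shift          # push the overlapping object aside
        if shift:                          # keep a marker at the true position
            markers.append((u, v))
            leaders.append(((u, v), (u2, v)))
        taken.append((u2, v))
        display_pos[oid] = (u2, v)
    return display_pos, markers, leaders

pos, marks, lines = resolve_overlaps({"obj212": (0.4, 0.5), "obj412": (0.4, 0.5)})
print(pos)  # obj412 is shifted; a marker and leader line point back
```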

Alternatively, without shifting the display positions, the virtual objects may be displayed superimposed at the same position as translucent, that is, see-through, images.

The above display is likewise performed on the display 122 by the display control unit 151, with the virtual object posture operation processing unit 154 performing posture operations on the virtual objects as necessary.

The virtual object posture operation processing unit 154 performs posture operations that adjust the display position, size, shape, and the like, based on the program 126 stored in the memory 124, so that each virtual object is displayed at its corresponding position.

Hereafter, unless otherwise noted, the same processing is used when displaying virtual objects on the display 122.

The virtual object groups may also be displayed by having the user select a desired virtual object from the superimposed virtual objects 212 and 412 (fig. 10) and displaying only the virtual object group associated with the real space in which the selected virtual object is arranged. In this case, the virtual object groups arranged in association with the other real spaces are not displayed.

This narrows the candidates, in one step, to the virtual object group arranged in the same real space as the desired virtual object, making the desired virtual object easy to select.

< example of switching display of virtual object group >

Fig. 12 is an explanatory diagram showing an example of switching display of a virtual object group in the head-mounted information processing apparatus 100 of fig. 1. In fig. 12, the same reference numerals as those shown in fig. 2, 3, 6, 7, 8, 9 and 10 are given to the same portions, and the same operations as those already described in fig. 2, 3, 6, 7, 8, 9 and 10 are given to the same portions, and therefore, the detailed description thereof will be omitted.

Fig. 12 shows an example in which, instead of displaying all the virtual objects superimposed as shown in fig. 11, the virtual object groups arranged in association with the respective real spaces are displayed by switching among them in sequence.

Display screen 1001 is a display screen representing a first virtual object group initially arranged in association with a first real space. The display screen 1002 is a display screen representing a second virtual object group arranged in association with the second real space.

The display screen 1003 is a display screen representing a third virtual object group arranged in association with the third real space. The display screen 1004 represents a display screen displayed when a virtual object group arranged in a different real space different from the first to third real spaces exists. These display screens 1001 to 1004 are sequentially displayed on the display 122 in a switched manner.

This makes it possible to view the virtual object groups arranged in association with the respective real spaces one at a time, instead of viewing all the virtual object groups at once. As a result, a desired virtual object can be found efficiently within each virtual object group, and visibility can be further improved.

The switching display of the virtual object groups is performed at fixed intervals long enough for comfortable viewing, or by an input operation such as a slide operation on the operation input interface 121. This can further improve the visibility of the virtual objects.

When, in the fixed-interval switching display, the user wants to view a virtual object in more detail or, conversely, to switch to the next screen sooner, the viewing time of each virtual object group can be lengthened or shortened by an operation input through the operation input interface 121.
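
A sketch of this timed switching, with the interval standing in for the user-adjustable viewing time; the print stands in for the display control unit's rendering call, and all names are illustrative:

```python
import itertools
import time

def cycle_virtual_object_groups(groups, interval_s=5.0, rounds=1):
    """Show each real space's virtual object group in turn, switching at a
    fixed interval that the user can lengthen or shorten."""
    order = itertools.islice(itertools.cycle(groups), rounds * len(groups))
    for group in order:
        print("displaying:", group)
        time.sleep(interval_s)

cycle_virtual_object_groups(["first space group", "second space group",
                             "third space group"], interval_s=0.1)
```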

When the display screens 1001, 1002, and 1003 are displayed, not only the virtual object group but also a live image in real space corresponding to the virtual object group may be displayed as a background.

In this case, the virtual object can be easily recognized by displaying the background of the live image. The real imaging background may be imaged by the camera unit 111 of fig. 1 and stored in the memory 124 in advance. When the angle of view of the camera unit 111 is narrow, the images taken separately may be combined and used.

< zoom-in/zoom-out and gesture operation of virtual object >

Next, operations of enlarging and reducing the virtual object and performing the posture operation by the head-mounted information processing apparatus 100 will be described. Fig. 13 is an explanatory diagram showing an example of the enlargement and reduction of the virtual object and the posture operation performed by the head-mounted information processing apparatus of fig. 1. Fig. 14 is an explanatory diagram showing another example of fig. 13.

In fig. 13 and 14, the same reference numerals as those shown in fig. 2, 3, 6, 7, 8, 9, and 10 are given to the same portions as those shown in fig. 2, 3, 6, 7, 8, 9, and 10, and the same operations as those already described in fig. 2, 3, 6, 7, 8, 9, and 10 are given to the same portions, and therefore, detailed description thereof will be omitted.

Fig. 13 shows an example in which a small, hard-to-view virtual object 612 is selected from among the virtual objects 211 to 213, 411 to 413, and 611 to 613 displayed in the list, and the viewability of the selected virtual object 612 is improved.

In this case, the virtual object posture operation processing unit 154 enlarges the shape of the virtual object 612, moves the enlarged virtual object to a predetermined position in front of the user, and arranges and displays it as the virtual object 1101. The list of all the virtual objects 211 to 213, 411 to 413, and 611 to 613 remains displayed in the background portion of the display screen.

Alternatively, the virtual object posture operation processing unit 154 may first move the virtual object 612 to the near side, without leaving it in place, and then perform the enlargement operation. The predetermined position is set to an easy-to-view position as an initial value. The initial value is stored in the memory 124 in advance, for example.

The initial value can be set by the control unit 125 writing setting information, input by the user via the operation input interface 121, into the memory 124. For example, by setting the initial value of the predetermined position to the region within reach of the hands in front of the body, the virtual object can be viewed easily and posture and transformation operations can be performed easily.
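
A sketch of this enlarge-and-pull-forward posture operation, with placeholder numbers for the stored initial position and magnification; only the view changes, while the stored object keeps its original place and size:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SelectedView:
    position: Tuple[float, float, float]  # meters, relative to the user
    scale: float = 1.0

def bring_to_front(view: SelectedView, magnification: float = 2.0,
                   front_pos=(0.0, -0.3, 0.6)) -> SelectedView:
    """Enlarge the selected object's view and move it to the preset
    easy-to-view position in front of the body."""
    view.scale *= magnification
    view.position = front_pos
    return view

v = bring_to_front(SelectedView(position=(3.0, 1.2, -4.0)))
print(v)  # SelectedView(position=(0.0, -0.3, 0.6), scale=2.0)
```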

The operation of moving the virtual object to the predetermined position may be performed automatically when the user selects an arbitrary virtual object using the operation input interface 121. Alternatively, it may be performed manually by a natural gesture of pulling the selected object closer. At the time of arrangement, the user may also determine or change the magnification using the operation input interface 121.

In the above operation, the control unit 125 controls the virtual object posture operation processing unit 154 and the like in accordance with the user operation input from the operation input interface 121. The virtual object posture operation processing unit 154 changes the information of the shape and the display position of the selected object.

The display control unit 151 reads information data 127 such as the shape and display position of the object stored in the memory 124 and displays the information data on the display 122.

This makes it possible to view more clearly a small virtual object that is difficult to see in the list display. After viewing is confirmed, the selected virtual object is restored, under the control of the control unit 125, to the original position in the background portion of the display screen that it occupied before the arrangement operation. This restoration can be performed automatically upon a viewing-confirmation completion operation.
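
The select, enlarge-and-move, and restore cycle described above can be summarized in the following Python sketch, modeling the state that the virtual object posture operation processing unit 154 reads and writes. All data structures and names (VirtualObject, bring_to_front, restore, FRONT_POSITION) are hypothetical illustrations; the patent does not specify an implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VirtualObject:
    """Hypothetical record for one virtual object held in the memory 124."""
    name: str
    position: Tuple[float, float, float]   # world coordinates (x, y, z)
    scale: float = 1.0
    saved: Optional[Tuple[Tuple[float, float, float], float]] = None

# Assumed initial value of the "predetermined position": within reach of
# the hands in front of the body (the patent leaves the value to settings).
FRONT_POSITION = (0.0, -0.2, 0.5)

def bring_to_front(obj: VirtualObject, magnification: float = 3.0) -> None:
    """Enlarge the object and move it to the predetermined front position,
    remembering its original placement so it can be restored later."""
    if obj.saved is None:
        obj.saved = (obj.position, obj.scale)
    obj.position = FRONT_POSITION
    obj.scale *= magnification

def restore(obj: VirtualObject) -> None:
    """Return the object to the placement it had before the arrangement."""
    if obj.saved is not None:
        obj.position, obj.scale = obj.saved
        obj.saved = None

obj_612 = VirtualObject("612", position=(4.0, 1.5, -6.0), scale=0.1)
bring_to_front(obj_612)   # displayed enlarged, like the virtual object 1101
restore(obj_612)          # back to the list position after viewing
```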

When another virtual object is subsequently selected and placed in front, the previously selected virtual object is restored to its original position, and the list display image of the original virtual objects remains as the background. This facilitates the arrangement operation and viewing confirmation of the next virtual object.

In a display in which the virtual objects are arranged within the omnidirectional image, the presence of all the virtual object groups can be seen, but it may be difficult to view the entire shape of an individual virtual object.

Therefore, for a virtual object arranged in the display screen of the display 122 that is difficult to view, the virtual object posture operation processing unit 154 operates its posture so that the entire shape of the virtual object becomes easy to view.

For example, the virtual object 411 shown in fig. 14 originally has the shape of a cube, but in its display shape within the omnidirectional image it cannot be recognized as a cube. The virtual object posture operation processing unit 154 first enlarges the virtual object 411 while moving it to a near display position where posture operations are easy to perform.

Then, the virtual object posture operation processing unit 154 performs a posture operation on the moved virtual object 411, including a three-dimensional rotation operation, until it reaches a display shape in which the overall shape is easy to see, thereby changing it to the display shape indicated by the virtual object 1201 and displaying it.

After the virtual object 1201 has been viewed and confirmed, it may be restored, as the virtual object 411, to its original position before the arrangement operation.

In this manner, the virtual object posture operation processing unit 154 performs a posture operation on a virtual object whose entire shape is difficult to view and shifts it to a display shape in which the entire shape can be seen, making it possible to accurately view and grasp the entire shape and appearance of the virtual object.

The display shape in which the entire shape is easily viewed may also be produced without any posture operation by the user, by using a display shape stored in advance in the memory 124 or the like. Such a display shape is obtained by recording, as posture information in the memory 124, information such as the direction, size, and color in which the virtual object is easiest to view in the model shape data at the time the virtual object is generated, and by subsequently applying this posture information to the generated virtual object. The user may also designate posture information for each virtual object, store it in the memory 124, and use it for display.
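
A minimal way to realize such stored posture information is to keep a per-object record alongside the model data and apply it on demand, as in the sketch below. The Euler-angle representation and all field names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Posture:
    """Hypothetical posture information recorded in the memory 124 when a
    virtual object is generated: the orientation, size, and color in which
    the entire shape is easiest to view."""
    yaw_deg: float
    pitch_deg: float
    roll_deg: float
    scale: float
    color: str

# Posture registered for the cube-shaped virtual object 411, chosen so
# that three faces are visible at once (all values are assumptions).
preferred_postures = {
    "411": Posture(yaw_deg=30.0, pitch_deg=-20.0, roll_deg=0.0,
                   scale=2.0, color="#c0c0ff"),
}

def apply_preferred_posture(object_id: str, current: Posture) -> Posture:
    """Use the stored posture if one is registered; otherwise leave the
    object as it is (the user may also register postures afterwards)."""
    return preferred_postures.get(object_id, current)

shown = apply_preferred_posture("411", Posture(0.0, 0.0, 0.0, 1.0, "#ffffff"))
print(shown)   # the display shape used for the virtual object 1201
```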

< transformation operation of virtual objects >

Further, the virtual object transformation operation processing unit 156 can perform a transformation operation on a virtual object. The virtual object transformation operation processing unit 156 reads the shape and display position of the virtual object stored in the memory 124, changes the shape and display position information of the selected virtual object, and writes the changed information back into the memory 124. The shape of the virtual object includes its direction, size, angle, and the like.

The display control unit 151 reads the information written in the memory 124 and, based on it, displays the transformed virtual object on the display 122.
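
The read-modify-write flow among the virtual object transformation operation processing unit 156, the memory 124, and the display control unit 151 might look as follows. The in-memory record layout and function names are assumptions, not the patent's implementation.

```python
from copy import deepcopy

# Hypothetical contents of the memory 124: one shape/display record
# per virtual object, keyed by object id.
memory_124 = {
    "213": {"position": (1.0, 0.8, -2.0), "size": 1.0, "yaw_deg": 0.0},
}

def transform_object(object_id: str, d_yaw_deg: float = 0.0,
                     size_factor: float = 1.0) -> None:
    """Virtual object transformation operation processing unit 156:
    read the stored shape, change its direction/size, and write it back
    so the change also persists in the normal display state."""
    record = deepcopy(memory_124[object_id])
    record["yaw_deg"] = (record["yaw_deg"] + d_yaw_deg) % 360.0
    record["size"] *= size_factor
    memory_124[object_id] = record

def display(object_id: str) -> None:
    """Display control unit 151: read the record and render it."""
    r = memory_124[object_id]
    print(f"object {object_id}: pos={r['position']} "
          f"size={r['size']:.2f} yaw={r['yaw_deg']:.0f} deg")

transform_object("213", d_yaw_deg=90.0, size_factor=1.5)
display("213")   # the transformed result, as also shown in figs. 2, 6, 8
```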

The result of the transformation operation performed by the virtual object transformation operation processing unit 156 is also reflected in the original display state of the virtual object. When the direction of the object is changed by the transformation operation, the direction of the virtual object itself changes, so the virtual object is displayed in the new direction when superimposed on the view of the real space. For example, even in the normal display states shown in figs. 2, 6, and 8, the virtual object is displayed in its orientation after the transformation operation.

During a transformation operation, the shape of the original virtual object before the operation, including its direction and size, may be displayed semi-transparently, or displayed in a part of the field of view not used for the operation. This makes it easy to see and understand the difference in shape, including direction and size, between the original virtual object and the transformed virtual object.

Whether a posture operation or a transformation operation is performed on a virtual object is designated before the operation, for example with an operation mode switching button (not shown) provided on the head-mounted information processing apparatus 100.

The posture operation and the transformation operation may also be combined; for example, a transformation operation may be performed after the virtual object has been enlarged by a posture operation so that it is easy to view. Operations used in the posture operation, such as rotation, enlargement, and reduction of the virtual object, may likewise be applied to the transformation operation.

< display example of virtual object groups in other real spaces >

Next, the operation of displaying, from the current real space, all the virtual object groups arranged in association with the respective real spaces will be described with reference to figs. 15 to 18.

In figs. 15 to 18, portions that are the same as those shown in figs. 2, 3, 6, 7, 8, 9, and 10 are given the same reference numerals and perform the same operations as already described for those figures, so detailed description thereof is omitted.

Fig. 15 is an explanatory diagram showing an example of a surrounding panorama when all virtual objects in a plurality of real spaces are viewed.

In fig. 15, the user 200 wearing the head mounted information processing device 100 is located at the center of the first room 201 and looks in the direction 203 opposite to the entrance door 202, as in the case of fig. 2.

In addition, a table 204, a computer 205, and a bookshelf 206 are placed in the first room 201. In the first room 201, as in fig. 2, virtual objects 211, 212, and 213 are arranged as a first virtual object group generated by the virtual object generation processing unit 155.

A second room 401 is disposed adjacent to the left side of the first room 201, and a third room 601 is disposed adjacent to the right side of the first room 201. In the second room 401, a television cabinet 404, a television 405, and a rack 406 are placed.

In the second room 401, virtual objects 411, 412, and 413 are arranged as a second virtual object group generated by the virtual object generation processing unit 155, as in fig. 6.

In the third room 601, a board 604, a window 605, and a clock 606 are arranged, and as in fig. 8, virtual objects 611, 612, and 613 are arranged as a third virtual object group generated by the virtual object generation processing unit 155.

Fig. 16 is an explanatory diagram showing a display example of virtual objects as viewed in the direction 203, with the entrance door 202 of fig. 15 behind the user. As shown in fig. 16, objects such as the house walls that partition the real spaces are rendered as if seen through, and the virtual objects 411 to 413 and 611 to 613 arranged in association with the other real spaces are displayed on the display screen.

This makes it possible to easily view all the virtual objects 211 to 213, 411 to 413, and 611 to 613 without being confined to one real space, improving usability when selecting a desired virtual object.

Each virtual object is displayed so that the real space with which it is associated can be recognized easily from its display position.

For example, in fig. 16, the virtual objects 611 to 613 are displayed on the right side of the display screen, so they can easily be identified as the virtual object group associated with the third room 601 in fig. 15.

Similarly, the virtual objects 411 to 413 are displayed on the left side of the display screen, so they can easily be identified as the virtual object group associated with the second room 401 in fig. 15.
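
The rule that the on-screen position of a virtual object should reveal its source real space can be sketched as a mapping from rooms to horizontal screen bands. The room layout and all coordinate values below are assumptions based on figs. 15 and 16.

```python
# Hypothetical layout from fig. 15: the second room lies to the left of
# the first room and the third room to its right. Screen x runs from
# 0.0 (left edge) to 1.0 (right edge).
room_of_object = {
    "211": "room1", "212": "room1", "213": "room1",
    "411": "room2", "412": "room2", "413": "room2",
    "611": "room3", "612": "room3", "613": "room3",
}
screen_band = {"room2": (0.00, 0.33), "room1": (0.33, 0.66),
               "room3": (0.66, 1.00)}

def screen_x(object_id: str, slot: int, slots: int = 3) -> float:
    """Place each object inside the horizontal band of its room, so the
    associated real space can be read off from the display position."""
    lo, hi = screen_band[room_of_object[object_id]]
    return lo + (hi - lo) * (slot + 0.5) / slots

for i, oid in enumerate(("611", "612", "613")):
    print(oid, f"x = {screen_x(oid, i):.2f}")   # right side of the screen
```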

Fig. 17 is an explanatory diagram showing another example of the display of fig. 16. In the display example shown in fig. 16, when another room, that is, another real space, is located far away, the virtual object group arranged in association with that real space is displayed small, and viewing may become difficult.

In this case, the virtual objects 411 to 413 are enlarged and displayed as virtual objects 1514 to 1516. Similarly, the virtual objects 611 to 613 are enlarged and displayed as virtual objects 1511 to 1513.

The virtual object posture operation processing unit 154 performs a shape enlargement operation on each of the virtual objects 411 to 413 and 611 to 613, generating the enlarged virtual objects 1511 to 1516.

This makes it possible to view more clearly a small virtual object that is difficult to see. When the shape of a virtual object is enlarged by the virtual object posture operation processing unit 154, it may be enlarged to a size that is easy to handle and easy to view.
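
One plausible way to decide the enlargement, under the assumption that apparent (angular) size is what matters, is to scale each distant object until it subtends a comfortable minimum visual angle. The threshold and the example values are assumptions, not values from the patent.

```python
import math

def enlarged_scale(true_size_m: float, distance_m: float,
                   min_apparent_deg: float = 5.0) -> float:
    """Scale factor that the posture operation processing unit 154 could
    apply so the object subtends at least min_apparent_deg of visual
    angle; near, sufficiently large objects are left unchanged."""
    apparent_deg = math.degrees(
        2.0 * math.atan2(true_size_m / 2.0, distance_m))
    return max(1.0, min_apparent_deg / apparent_deg)

# A 0.2 m object in the third room about 8 m away (values assumed),
# enlarged into an easily viewable object such as 1511 to 1513.
print(f"scale factor: {enlarged_scale(0.2, 8.0):.1f}x")
```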

Further, the virtual object 1511 is displayed in a display shape in which the entire shape of the virtual object 411 in fig. 16 is easily viewed, based on the posture information specified in advance as described above. Some or all of the virtual objects may, by the user's designation, be displayed in display shapes in which their entire shapes are easy to view.

As shown in fig. 16, by displaying all the virtual objects through the room walls and the like that would otherwise block the view between the current real space and the other real spaces, the real space associated with a selected virtual object can be identified easily.

Fig. 18 is an explanatory diagram showing another example of the list display of the virtual object groups of fig. 16. Fig. 18 shows a display example of the transition from the display of fig. 10 to the display of fig. 16. The virtual objects 611a, 612a, and 613a are the virtual objects as shown in fig. 10. The virtual objects 611b, 612b, and 613b are the virtual objects after the display example of fig. 10 has changed to the display example of fig. 16.

That is, when the display shifts from that of fig. 10 to that of fig. 16, the virtual objects 611a, 612a, and 613a move, as shown in fig. 18, so as to gather in the right side portion of the display screen as the virtual objects 611b, 612b, and 613b.

This makes it easy to recognize that the virtual objects 611b, 612b, and 613b are arranged in association with the third real space, the third room 601, located to the right of the first room 201 on the screen.

During this display transition, the virtual objects are moved gradually, at a speed the eyes can follow, so that the user can reliably see with which real space the moving virtual object group is associated.
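
The gradual transition can be modeled as time-based interpolation at a speed slow enough for the eyes to follow, for example with a smoothstep easing curve. The duration, frame rate, and coordinates below are illustrative assumptions.

```python
def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow start and end, so the eyes can follow."""
    return t * t * (3.0 - 2.0 * t)

def animate(start, end, duration_s: float = 1.5, fps: int = 4):
    """Yield intermediate screen positions for a virtual object moving
    from its fig. 10 position (611a) to its fig. 16 position (611b)."""
    steps = max(1, round(duration_s * fps))
    for i in range(steps + 1):
        t = ease_in_out(i / steps)
        yield tuple(s + (e - s) * t for s, e in zip(start, end))

for x, y in animate(start=(0.50, 0.40), end=(0.85, 0.40)):
    print(f"({x:.2f}, {y:.2f})")   # drawn one frame at a time
```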

Fig. 18 describes an example of moving one virtual object group, but all the virtual objects may be moved at once. Alternatively, only one or more selected virtual objects may be moved and displayed, while unselected virtual objects are hidden.

Next, a case will be described in which the virtual object groups arranged in association with the respective real spaces are shown on a multiple display.

Fig. 19 is an explanatory diagram showing an example of a multiple display screen produced by the head-mounted information processing apparatus of fig. 1. Fig. 19 shows an example in which the display screens shown in figs. 3, 7, and 9 are reduced and displayed together on the display screen of the display 122.

In fig. 19, portions that are the same as those shown in figs. 2, 3, 6, 7, 8, 9, and 10 are given the same reference numerals and perform the same operations as already described for those figures, so detailed description thereof is omitted.

In fig. 19, a display screen 1701, a display screen 1702, a display screen 1703, and a display screen 1704 are displayed in the display screen of the display 122 in this order from the upper left.

The display screen 1701 shows a first virtual object group arranged in association with a first real space. The display screen 1702 shows a second virtual object group arranged in association with a second real space. The display screen 1703 shows a third virtual object group arranged in association with a third real space. The display screen 1704 shows a fourth virtual object group arranged in association with a fourth real space.

On the display screen 1704, which shows the fourth virtual object group, the virtual objects 1705 and 1706 are arranged in association with a fourth real space formed by an outdoor scene of buildings, vehicles, people, and the like.

To select a desired virtual object, the user displays the virtual object groups associated with the respective real spaces at reduced size as shown in fig. 19 and searches for the object across the multiple display screens 1701 to 1704. The display screen containing the selected virtual object is then switched to a normal full-screen display, in which the desired virtual object is easy to view.
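
A 2x2 multiple display like that of fig. 19 amounts to dividing the display into quadrant viewports and promoting the selected one to full screen. The viewport arithmetic below is a generic sketch, not the patent's implementation; the resolution values are assumptions.

```python
from typing import Dict, Tuple

Rect = Tuple[int, int, int, int]            # (x, y, width, height)

def grid_viewports(screen_w: int, screen_h: int,
                   rows: int = 2, cols: int = 2) -> Dict[int, Rect]:
    """Divide the display 122 into rows*cols reduced display screens
    (1701 to 1704 for 2x2; 9- or 16-fold grids work the same way)."""
    w, h = screen_w // cols, screen_h // rows
    return {1701 + r * cols + c: (c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)}

def promote_to_full(screen_w: int, screen_h: int) -> Rect:
    """Viewport of the screen that holds the selected virtual object
    after it is switched to a normal full display screen."""
    return (0, 0, screen_w, screen_h)

views = grid_viewports(1920, 1080)
print(views[1704])                    # lower-right reduced screen
print(promote_to_full(1920, 1080))    # after selecting an object there
```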

This makes it possible to easily select, from a multiple display screen showing the virtual object groups associated with the respective real spaces, the real space in which a desired virtual object is arranged. It is also possible to display only the virtual object group to which the selected virtual object belongs, so that the desired virtual object can be viewed in an easy-to-view state.

Fig. 19 uses a quadruple display showing four display screens, but various other multiple display forms, such as 9-screen and 16-screen displays, may be employed.

As shown in fig. 20, the plurality of spaces may also be displayed in the form of an overhead (plan) view. In this case, the video processing unit 153 generates the overhead-view image, and the display control unit 151 displays the image generated by the video processing unit 153 on the display 122. This makes the spaces easy to visualize, so that a space containing the desired virtual object can be selected easily.

When the overhead view of a space is generated, the user's own avatar is displayed at the user's current position in the image, which makes it easy to grasp the space in which the user is present.

When other users are present in the displayed spaces, their coordinate information within those spaces is acquired by wireless communication or the like, and they are displayed at the corresponding coordinate positions in the overhead view.

Similarly, the presence of the user and of other users may be indicated on the multiple display image by avatars, marks, or the like. This makes it easy to grasp the positional relationship between the spaces and each user.

The spatial position of a user can be determined using various kinds of sensor information, for example the distance from a wall obtained by the depth sensor 142, or an image captured by the camera unit 111. The image of the displayed avatar is recorded in advance in the memory 124 as the information data 127, and the video processing unit 153 synthesizes the spatial video with the information data 127 stored in the memory 124.
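
Compositing avatars into the overhead view reduces to converting each user's in-space coordinates into pixel coordinates of the plan-view image. The coordinate conventions, scale, and positions below are assumptions for illustration.

```python
from typing import Dict, Tuple

def to_pixels(pos_m: Tuple[float, float], origin_px: Tuple[int, int],
              px_per_m: float) -> Tuple[int, int]:
    """Map a position within a room (meters, obtained e.g. from wall
    distances measured by the depth sensor 142) to plan-view pixels."""
    return (origin_px[0] + round(pos_m[0] * px_per_m),
            origin_px[1] + round(pos_m[1] * px_per_m))

# Hypothetical plan view: the first room is drawn with its origin at
# pixel (100, 80), at 40 px per meter. The user stands 2.0 m from the
# left wall and 1.5 m from the back wall; the other user's coordinates
# arrive by wireless communication.
users: Dict[str, Tuple[float, float]] = {"self": (2.0, 1.5),
                                         "other": (0.5, 3.0)}
for name, pos in users.items():
    print(name, "avatar at", to_pixels(pos, (100, 80), 40.0))
```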

The coordinate information of another user is obtained from the various sensors, cameras, and the like of the information terminal carried by that user and is transmitted from the communication interface of that terminal. The head-mounted information processing apparatus 100 receives the coordinate information directly through the communication interface 1804, or alternatively via a server (not shown).

Figs. 2 to 19 describe examples in which virtual objects are placed in real spaces where the user is present, but similar operations can be performed when the space in which the user operates, as in VR, is itself a virtual space and the virtual objects are placed in that virtual space.

In addition, the virtual object can be used as a reminder or the like. The virtual object generation processing unit 155 can generate a virtual object in a space other than the current space.

This function can be used for reminders. For example, while the user is in the living room, the virtual object generation processing unit 155 generates a virtual object of an umbrella and places it at the entrance, providing a reminder not to forget the umbrella when going out.
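
Such a reminder amounts to generating a virtual object whose associated space differs from the space the user currently occupies, so that it appears when that space comes into view. The record layout and function names in this sketch are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Reminder:
    label: str
    space: str                            # space the object is tied to
    position: Tuple[float, float, float]  # coordinates within that space

reminders: List[Reminder] = []

def place_reminder(label: str, space: str,
                   position: Tuple[float, float, float]) -> None:
    """Virtual object generation processing unit 155: generate an object
    in a space other than the one the user currently occupies."""
    reminders.append(Reminder(label, space, position))

def visible_in(space: str) -> List[Reminder]:
    """Objects to display when this space is occupied or comes into view."""
    return [r for r in reminders if r.space == space]

# Created while in the living room; shown when the user reaches the entrance.
place_reminder("umbrella", space="entrance", position=(0.3, 0.0, 1.2))
print(visible_in("entrance"))
```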

Further, information indicating a request instruction made by the user via the operation input interface 121, or the operations and display actions performed by the head-mounted information processing apparatus 100, may be shown on the display 122.

Alternatively, the user may be notified of this information by sound from the headphones 120, by vibration generated by the vibration generating unit 117 in close contact with the user, or by a stimulus generated by the stimulus generating unit 146.

This makes it possible to reliably notify the user of the operating state of the head-mounted information processing apparatus and have the user recognize it.

Input for the operations and display actions performed by the head-mounted information processing apparatus 100 is not limited to input via the operation input interface 121; for example, a movement of the user's hand may be detected by the camera unit 111 or the like and taken in as an input operation based on that movement.

As described above, even when a virtual object is in another real space, it can be viewed and manipulated easily.

(Embodiment 2)

< configuration example of head-mounted display system >

Fig. 21 is a block diagram showing an example of the configuration of a head-mounted display system 1801 according to embodiment 2.

As shown in fig. 21, the head-mounted display system 1801 is composed of a head-mounted information processing apparatus 100 and a virtual object generation server device 1802, each of which is connected to a network 1803.

The head-mounted information processing apparatus 100 shown in fig. 21 is newly provided with a communication interface 1804 and a transmission/reception antenna 1805, in addition to the functional blocks denoted by the same reference numerals as in fig. 1. On the other hand, the head-mounted information processing apparatus 100 of fig. 21 does not include the virtual object generation processing unit 155.

The virtual object generation server device 1802 includes a virtual object generation processing unit 1811, a memory 1812, a control unit 1813, a communication interface 1814, a transmission/reception antenna 1815, and the like. The functional blocks in the virtual object generation server device 1802 are connected to one another via a bus 1820. In fig. 21, the same processing units as in the embodiment of fig. 1 are denoted by the same reference numerals, and their description is omitted.

In the head-mounted display system 1801, the virtual object generation processing unit 1811 included in the virtual object generation server device 1802 generates a virtual object.

The memory 1812 stores the virtual object generated by the virtual object generation processing section 1811. The communication interface 1814 transmits the virtual object held in the memory 1812 from the transmitting/receiving antenna 1815 to the head-mounted information processing apparatus 100 via the communication network, i.e., the network 1803. The head-mounted information processing apparatus 100 receives the virtual object transmitted via the network 1803.

In fig. 21, the display processing of virtual objects in the head-mounted information processing apparatus 100 is the same as in embodiment 1; the difference from embodiment 1 is that the virtual object is generated by the virtual object generation server device 1802, a separate device from the head-mounted information processing apparatus 100.

In the virtual object generation server device 1802, the memory 1812 is a nonvolatile semiconductor memory such as a flash memory, as in the memory 124 of the head-mounted information processing device 100.

The memory 1812 stores various programs to be used by the control section 1813 of the virtual object generation server apparatus 1802, virtual objects generated, and the like. The communication interface 1814 is a communication interface for communicating with the head-mounted information processing apparatus 100 via the network 1803, and transmits and receives information to and from the head-mounted information processing apparatus 100.

The control unit 1813 is configured by, for example, a CPU or the like, and executes programs such as an OS and an operation control application stored in the memory 1812 to control the respective functional modules and control the entire virtual object generation server device 1802.

The control unit 1813 controls the generation of virtual objects by the virtual object generation processing unit 1811, the saving of the generated virtual objects in the memory 1812, and the like. It also controls transmission of a generated virtual object to the head-mounted information processing apparatus 100 in response to a transmission/output request for that virtual object from the head-mounted information processing apparatus 100.
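
The division of labor in fig. 21, with generation and storage on the virtual object generation server device 1802 and display on the head-mounted information processing apparatus 100, can be sketched as a minimal request/response exchange. The message format is an assumption; to keep the sketch self-contained and runnable, the server function is called directly in place of the network 1803.

```python
import json

# --- virtual object generation server device 1802 (sketch) -----------
memory_1812 = {}     # generated virtual objects, keyed by object id

def generate_virtual_object(obj_id: str, shape: str, space: str) -> None:
    """Virtual object generation processing unit 1811 + memory 1812."""
    memory_1812[obj_id] = {"id": obj_id, "shape": shape, "space": space}

def handle_request(raw: bytes) -> bytes:
    """Control unit 1813: answer a transmission/output request."""
    request = json.loads(raw)
    obj = memory_1812.get(request["want"])
    return json.dumps(obj).encode()

# --- head-mounted information processing apparatus 100 (sketch) ------
def request_object(obj_id: str) -> dict:
    """Communication interface 1804 side: here the server function is
    called directly in place of the network 1803 to stay runnable."""
    reply = handle_request(json.dumps({"want": obj_id}).encode())
    return json.loads(reply)

generate_virtual_object("411", shape="cube", space="room2")
print(request_object("411"))   # then displayed as in embodiment 1
```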

Thus, the virtual object can be generated not by the head mounted information processing apparatus 100 but by the virtual object generation server apparatus 1802 separate from the head mounted information processing apparatus 100.

As a result, larger amounts of virtual object information can be handled, and virtual objects requested by head-mounted information processing apparatuses 100 in a plurality of locations can be generated and distributed to them.

As described above, virtual objects arranged in other real spaces can be viewed and operated easily and simultaneously by a plurality of head-mounted information processing apparatuses 100.

The present invention has been described specifically based on the embodiments, but the present invention is not limited to the above embodiments, and various modifications can be made without departing from the scope of the present invention.

The present invention is not limited to the above-described embodiments and includes various modifications. For example, the above embodiments have been described in detail for easy understanding of the present invention, and the invention is not necessarily limited to configurations having all of the described elements.

In addition, a part of the structure of one embodiment may be replaced with the structure of another embodiment. In addition, the configuration of another embodiment may be added to the configuration of one embodiment. Further, a part of the configuration of each embodiment can be added, deleted, or replaced with another configuration.

In addition, a part or all of the above-described structures, functions, processing units, processing functions, and the like may be realized by hardware, for example, by designing an integrated circuit. The above-described structures, functions, and the like may be realized by software by interpreting and executing a program for realizing each function by a processor. Information such as programs, tables, and files for realizing the respective functions can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.

The control lines and information lines shown are those considered necessary for the description; not all control lines and information lines in a product are necessarily shown. In practice, almost all components may be considered to be interconnected.

Description of reference numerals

100 head-mounted information processing device

111 camera unit

112 right-eye line-of-sight detection unit

113 left-eye line-of-sight detection unit

117 vibration generating unit

118 ambient sound microphone

119 voice microphone

120 headphones

121 operation input interface

122 display

124 memory

125 control unit

140 bus

142 depth sensor

143 acceleration sensor

144 gyroscope

145 geomagnetic sensor

146 stimulus generating unit

151 display control unit

152 data management unit

153 video processing unit

154 virtual object posture operation processing unit

155 virtual object generation processing unit

156 virtual object transformation operation processing unit

1801 head-mounted display system

1802 virtual object generation server device

1803 network

1804 communication interface

1805 transmission/reception antenna

1811 virtual object generation processing unit

1812 memory

1813 control unit

1814 communication interface

1815 transmission/reception antenna
