Image display device and image display method

Document No.: 174274  Publication date: 2021-10-29

Abstract: This technology, "影像显示装置和影像显示方法" (Image display device and image display method), was devised by 中出真弓, 伊藤保 and 桥本康宣 on 2019-02-26. A portable video display device (1) is provided with a sensor (50) for detecting the position and direction of a user (2), a trajectory information acquisition unit for acquiring information on the movement trajectory of the user, a storage unit (40) for storing the trajectory information of the user and information on an avatar, and a display unit (72) for displaying the movement trajectory of the user with the avatar. The control unit generates an avatar (3), determines the current visual field area of the user using the sensor, arranges the avatar (3) on the movement trajectory of the user in accordance with the current visual field area of the user based on the trajectory information of the user stored in the storage unit, and displays it on the display unit (72). When the avatar (3) is moved along the movement trajectory of the user, it is moved from temporally newer trajectory information toward temporally older trajectory information.

1. An image display device that can be carried by a user and that displays a movement trajectory of the user as an image, comprising:

a sensor for detecting the position and direction of the user carrying the image display device;

a trajectory information acquiring unit that acquires information on a movement trajectory of the user from a detection result of the sensor;

a storage unit for storing the trajectory information of the user acquired by the trajectory information acquisition unit and information indicating an avatar that is a virtual image of the user;

a display unit for displaying the movement trajectory of the user by the avatar; and

a control unit for controlling the track information acquisition unit and the display unit,

wherein the control unit is configured to:

generate the avatar from the avatar information stored in the storage unit, and obtain a current visual field area of the user using the sensor, and

arrange the generated avatar on the movement trajectory of the user in accordance with the current visual field area of the user, based on the trajectory information of the user stored in the storage unit, and display it on the display unit.

2. The image display device of claim 1, wherein:

when moving the avatar along the movement trajectory of the user, the control unit moves the avatar from temporally newer trajectory information toward temporally older trajectory information.

3. The image display device according to claim 2, wherein:

the trajectory information acquisition unit acquires information on the movement trajectory of the user at predetermined time intervals and stores the position coordinates of each trajectory point in the storage unit, and

the control unit, at every unit time or at every unit time multiplied by an arbitrary coefficient, reads the position coordinates of each trajectory point from the storage unit and places the avatar at those position coordinates.

4. The image display device according to claim 3, wherein:

the trajectory information acquisition unit stores the position coordinates of each trajectory point in the storage unit as the difference from the position coordinates of the temporally preceding trajectory point, and stores the position coordinates of at least the trajectory point of the end point in the storage unit as absolute values.

5. The image display device according to claim 2, wherein:

when moving the avatar, the control unit displays the facing direction of the avatar's body as the direction opposite to the traveling direction of the avatar.

6. The image display device according to claim 2, wherein:

the image display device is used while worn on the head of the user,

the sensor further detects the height and direction of the head of the user wearing the image display device,

the trajectory information acquisition unit stores information of the height and direction of the head of the user detected by the sensor in the storage unit as the trajectory information,

when generating and displaying the avatar, the control unit determines the posture of the avatar and the facing direction of its face based on the height and direction information of the user's head stored in the storage unit, by comparing the head height information with the height of the user.

7. The image display device according to claim 2, wherein:

the control unit performs a guidance process for changing a visual field direction of the user or moving the user to a position where the avatar can be seen, when the avatar cannot be placed in the current visual field region of the user.

8. The image display device according to claim 3, wherein:

the image display device further comprises an imaging unit for capturing an external scene,

the data captured by the imaging unit is stored in the storage unit in association with a shooting point indicating the shooting position, and

the control unit displays an image of the shooting data stored in the storage unit on the display unit when the position of the avatar is at the shooting point.

9. The image display device of claim 8, wherein:

the control unit stores in the storage unit, among the data captured by the imaging unit, images captured at shooting points where the direction of the movement trajectory of the user changed, and

the control unit displays an image of the shooting data stored in the storage unit on the display unit when the position of the avatar is at a shooting point where the direction of the movement trajectory of the user changed.

10. The image display device according to claim 3, wherein:

the image display device further comprises an imaging unit for capturing an external scene,

the display unit displays the current image being captured by the imaging unit, and

the control unit displays the avatar superimposed on the image of the external scene being displayed on the display unit, based on the movement trajectory of the user.

11. An image display system in which a plurality of image display devices carried by a plurality of users are connected to each other and movement trajectories of the plurality of users are displayed as images in the plurality of image display devices, characterized in that:

the plurality of image display devices respectively include:

a sensor for detecting the position and direction of the user carrying the image display device;

a trajectory information acquiring unit that acquires information on a movement trajectory of the user from a detection result of the sensor;

a storage unit for storing the trajectory information of the user acquired by the trajectory information acquisition unit and information indicating avatars that are virtual images of the user and other users;

a communication processing unit for transmitting and receiving trajectory information of each user to and from other image display devices carried by other users;

a display unit for displaying the movement locus of each user by each avatar; and

a control unit for controlling the trajectory information acquisition unit, the communication processing unit, and the display unit,

wherein the control unit is configured to:

generate an avatar of each user from the avatar information stored in the storage unit, and obtain the current visual field area of the user using the sensor, and

arrange the generated avatars on the movement trajectories of the user and the other users in accordance with the current visual field area of the user, based on the trajectory information of the user stored in the storage unit and the trajectory information of the other users received by the communication processing unit, and display them on the display unit.

12. An image display method for displaying a movement trajectory of a user using an image, comprising:

a step of detecting the position and direction of the user and acquiring information on the movement trajectory of the user;

a step of storing the acquired trajectory information of the user in a storage unit;

a step of generating an avatar that is a virtual image representing the user;

a step of obtaining a current visual field region of the user; and

and a step of arranging the generated avatar on the movement trajectory of the user in accordance with the current visual field area of the user, based on the trajectory information of the user stored in the storage unit, and displaying it on a display unit.

13. The image display method of claim 12, wherein:

in the displaying step, when the avatar is moved along the movement trajectory of the user, the avatar is moved from the temporally newer trajectory information to the older trajectory information.

Technical Field

The present invention relates to a portable video display device and a video display method.

Background

In recent years, portable video display devices such as smartphones have come into wide use. Among them, a head mounted display (hereinafter referred to as HMD) worn on the head of a user displays, on a glasses-type display screen, augmented reality (AR) images generated by a computer or the like superimposed on the real space. Further, by mounting sensors on the HMD, information acquired by the sensors can be displayed on the display screen as AR images. For example, patent document 1 discloses a configuration in which a mobile terminal for drawing a user's action history includes a terminal information acquisition unit for acquiring position information of the terminal itself and the like, a camera unit for generating a camera image of the surroundings, a drawing history calculation unit for calculating action history drawing information (an avatar) to be displayed based on the action history acquired in advance and the imaging range of the camera unit, an image synthesis unit for generating a synthesized image in which the action history drawing information is drawn into the camera image, and a display unit for displaying the synthesized image.

Documents of the prior art

Patent document

Patent document 1: international publication No. 2011/093031

Disclosure of Invention

Problems to be solved by the invention

When a user moves along a complicated route, forgetting a place passed on the way or the return route is common, especially among the elderly. In such a case, as with a navigation device, there are methods that guide the user along a representative route back to a departure point once the departure point is set; however, such guidance does not necessarily follow the route the user actually traveled, and an accurate reproduction of the intermediate route cannot be expected.

In patent document 1, when a user instructs recording of an action history, the action history is registered in an action history server. The requested action history of a user is then acquired from the action history server, and an avatar is superimposed on the camera image in accordance with the position information of that user. However, patent document 1 assumes that the action history of some other user is acquired and displayed, and displaying the action history of the user himself/herself is not particularly considered. In addition, since the avatar is displayed in the order in which the user moved along the points of the action history, the display is not suitable as a tool for guiding a user who has forgotten the return route back from the current position.

An object of the present invention is to provide an image display device that guides a return route along the path the user actually traveled.

Means for solving the problems

In order to solve the above problem, an image display device according to the present invention includes: a sensor for detecting the position and direction of a user carrying the image display device; a trajectory information acquisition unit that acquires information on a movement trajectory of the user from a detection result of the sensor; a storage unit for storing the trajectory information of the user acquired by the trajectory information acquisition unit and information of an avatar representing a virtual image of the user; a display unit for displaying the movement trajectory of the user by the avatar; and a control unit for controlling the trajectory information acquisition unit and the display unit. Here, the control unit generates an avatar from the avatar information stored in the storage unit, obtains the current visual field area of the user using the sensor, arranges the avatar on the movement trajectory of the user in accordance with the current visual field area of the user based on the trajectory information of the user stored in the storage unit, and displays the avatar on the display unit. When moving the avatar along the movement trajectory of the user, the control unit moves it from temporally newer trajectory information toward temporally older trajectory information.

Effects of the invention

According to the present invention, even a user who has forgotten a place passed on the way or the return route while moving can easily understand the return route.

Drawings

Fig. 1 is an external view of the HMD according to example 1.

Fig. 2 is a block diagram showing an internal structure of the HMD.

Fig. 3 is a diagram showing an example of a connection configuration between the communication processing unit and the external device.

Fig. 4 is a diagram showing a configuration of a functional module of the HMD.

Fig. 5 is a diagram showing an example of trajectory collection by a user.

Fig. 6 is a flowchart showing the trajectory collection processing.

Fig. 7A is a track information holding table for holding position information.

Fig. 7B is a difference track information storage table that stores position information by difference.

Fig. 7C is a start point/end point position coordinate table for storing position information at the start time/end time.

Fig. 8A is a still image data save table that saves still images.

Fig. 8B is a moving image data storage table for storing moving images.

Fig. 9A is a flowchart showing the whole of the avatar display processing.

Fig. 9B is a flowchart showing the help processing in fig. 9A.

Fig. 10A is a diagram showing an example of the steering guidance display.

Fig. 10B is a diagram showing an example of the moving direction guidance display.

Fig. 11A is a diagram showing an example of display of an avatar.

Fig. 11B is a diagram showing an example of display in which the avatar has changed the facing direction.

Fig. 12 is a table showing an example of a voice command used for the operation of the HMD.

Fig. 13 is a diagram illustrating an operation instruction by the user's finger.

Fig. 14A is a two-dimensional track information storage table storing position information according to embodiment 2.

Fig. 14B is a two-dimensional start/end position coordinate table at the start/end.

Fig. 15 is a schematic diagram of an avatar display in embodiment 3.

Fig. 16 is a schematic diagram of an avatar display in embodiment 4.

Fig. 17A is a track information holding table in embodiment 7.

Fig. 17B is a diagram showing an example of display of an avatar.

Fig. 18 is a diagram showing an external appearance of a smartphone of embodiment 8.

Fig. 19 is a diagram showing a configuration of an image display system in which a plurality of HMDs are connected according to embodiment 9.

Fig. 20 is a diagram showing an example of display of a plurality of avatars.

Detailed Description

Hereinafter, the embodiments of the present invention will be described mainly with reference to an example of a Head Mounted Display (HMD) as a head mounted video display device.

Example 1

Fig. 1 shows an external view of an HMD according to example 1. HMD1 is in the form of glasses, and has a display unit 72 for the user to view images and an imaging unit 71 for taking an external view arranged on the front surface, and various processing units described later housed in temple portions (temples) 91. Further, a part of the processing unit in the temple portion 91 may be housed in another case separately from the HMD main body, and connected to the HMD main body by a cable.

Fig. 2 is a block diagram showing an internal configuration of the HMD 1. The HMD1 includes a main control unit 10, a system bus 30, a storage unit 40, a sensor unit 50, a communication processing unit 60, an image processing unit 70, an audio processing unit 80, and an operation input unit 90.

The main control unit 10 is a microprocessor unit that controls the whole HMD1 according to a predetermined operation program. The system bus 30 is a data channel for transmitting and receiving various commands, data, and the like between the main control unit 10 and each of the constituent modules in the HMD 1.

The storage unit 40 stores various programs 41 for controlling the operation of the HMD1, operation setting values, detection values from the sensor unit described later, and various data 42 such as objects including contents, and has a work area 43 used by the various programs. The storage unit 40 can store operation programs downloaded from a network, various data generated by those programs, and contents such as downloaded moving images, still images, and audio. It can also store data such as moving images and still images captured by the imaging unit 71. The storage unit 40 needs to hold the stored information even when no power is supplied to the HMD1 from the outside. Thus, for example, semiconductor element memories such as flash ROM and SSD (Solid State Drive) and disk drives such as HDD (Hard Disk Drive) are used. The operating programs stored in the storage unit 40 can be updated and extended by download processing from server devices on the network.

The sensor unit 50 includes a GPS (Global Positioning System) sensor 51, a geomagnetic sensor 52, a distance sensor 53, an acceleration sensor 54, a gyro sensor 55, a height sensor 56, and the like, in order to detect various states of the HMD1. These sensor groups are used to detect the position, tilt, direction, motion, height, and the like of the HMD1. In addition, other sensors such as an illuminance sensor and a proximity sensor may be provided.

The communication processing unit 60 includes a LAN (Local Area Network) communication unit 61 and a telephone network communication unit 62. The LAN communication unit 61 is connected to a network such as the Internet via an access point or the like, and transmits and receives data to and from server devices on the network. Connection to the access point or the like may be performed by wireless connection such as Wi-Fi (registered trademark).

The telephone network communication unit 62 performs telephone communication (conversation) and data transmission/reception by wireless communication with a base station or the like of a mobile telephone communication network. Communication with the base station or the like may be performed by the W-CDMA (Wideband Code Division Multiple Access) scheme, the GSM (Global System for Mobile Communications) scheme, the LTE (Long Term Evolution) scheme, or another communication scheme. The LAN communication unit 61 and the telephone network communication unit 62 each include an encoding circuit, a decoding circuit, an antenna, and the like. The communication processing unit 60 may further include other communication units such as a Bluetooth (registered trademark) communication unit and an infrared communication unit.

The image processing unit 70 includes an imaging unit 71 and a display unit 72. The imaging unit 71 is a camera unit that converts light input through a lens into an electric signal using an electronic device such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, and inputs image data of objects in the external scene. The display unit 72 is a display device using a transmissive display such as a laser projector and a half mirror, and provides image data to the user of the HMD1.

The audio processing unit 80 includes an audio input/output unit 81, an audio recognition unit 82, and an audio decoding unit 83. The voice input of the voice input/output unit 81 is a microphone, and converts a user's voice or the like into voice data and inputs the voice data. The audio output of the audio input/output unit 81 is a speaker, and outputs necessary audio information to the user. The voice recognition unit 82 analyzes the input voice information and extracts an instruction command and the like. The audio decoding unit 83 performs decoding processing and audio synthesis processing of the encoded audio signal.

The operation input unit 90 is an instruction input unit for inputting operation instructions to the HMD1. The operation input unit 90 is configured by operation keys in which button switches or the like are arranged, but may include other operation devices. For example, the HMD1 may be operated by another portable terminal device connected by wired or wireless communication using the communication processing unit 60. The HMD1 may also be operated by voice commands using the voice recognition unit 82 of the audio processing unit 80.

In addition, the HMD1 shown in fig. 2 includes a plurality of structures that are not essential to the present embodiment, and even if the HMD1 does not include any of these structures, the effects of the present embodiment are not impaired. On the other hand, a configuration not shown in the figure, such as a digital broadcast reception function and an electronic money settlement function, may be further added.

Fig. 3 is a diagram showing an example of a connection configuration between the communication processing unit 60 and an external device. The LAN communication unit 61 of the HMD1 is connected to a network 5 such as the internet via a wireless router 4 serving as an access point. The server 6 is connected to the network 5, and data is transmitted and received via the LAN communication unit 61 of the HMD 1.

Fig. 4 is a diagram showing a configuration of functional modules of HMD 1.

The overall control process of the HMD1 is mainly executed by the main control unit 10 using the various programs 41 and the various data 42 in the storage unit 40. The processing in the HMD1 includes trajectory collection processing (S100) for collecting and storing information on the trajectory of the user 'S movement, and avatar display processing (S200) for displaying an avatar representing the user' S virtual image based on the stored trajectory information (each indicated by a dotted line and a dashed line).

In the trajectory collection processing (S100), information from various sensors of the sensor unit 50 is acquired by the various sensor information acquisition function 11, and the acquired information from the various sensors is converted into trajectory information that is easy to handle internally by the trajectory information processing function 12. The track information after the conversion processing is stored by the track information storage function 13. When an imaging instruction is given by the user during the acquisition of the trajectory information, the image is captured by the imaging unit 71 of the image processing unit 70. The imaging processing in this case is performed by the imaging data acquisition function 14, and the acquired imaging data is stored in the imaging data storage function 15 in association with the trajectory information.

In the avatar display processing (S200), avatar information of the user stored in advance in the avatar information storage function 16 is read, and the avatar generated by the avatar generation function 17 is displayed by the avatar display function 18. The display position of the avatar is determined based on the trajectory information stored in the trajectory information storage function 13. However, the field-of-view calculation function 19 determines whether or not the display position of the avatar is present in the field of view of the HMD1, and if not, the avatar is not displayed. The shot data stored in the shot data storage function 15 is reproduced by the shot data reproduction function 20 according to the path trajectory.

The storage functions such as the trajectory information storage function 13, the shot data storage function 15, and the avatar information storage function 16 may be stored in the external server 6 via the LAN communication unit 61 of the communication processing unit 60, as shown in fig. 3.

Fig. 5 is a diagram showing an example of trajectory collection by a user. In this example, the user 2 wearing the HMD1 moves through the track points 501, 502, 503, 504 in this order (indicated by ● symbols). At the track point 503, where the moving direction changes by 90 degrees (a turn), the object 510 is photographed by the imaging unit 71; the track point 503 therefore becomes a shooting point. Objects to be photographed are, for example, a place the user passed, a place where the user turned or took a wrong path, or an animal, plant, building, or the like seen at that place; whether the photographed image is reproduced later when the path trajectory is replayed is left to the user's discretion.

Fig. 6 is a flowchart showing the trajectory collection processing (S100) of the HMD1, taking the trajectory diagram of fig. 5 as an example. The processing flow is stored in various programs 41 in the storage unit 40.

S110: the trajectory collection process is started by the instruction of the user 2 (S100). The indication of the start of the trajectory collection is made by the voice of the user 2 in the present embodiment. For example, when the voice command "track start" is spoken, the voice recognition unit 82 determines that the track collection process is to be started.

S111: and initializing the point number of the track point. Here, 0 is set as the initial value of the point number p, and the point number p at the start time of the trajectory collection process is set to 0.

S112: a timer is started. The timer performs timing using a clock built in the main control section 10. The timer is used to collect data of a user's track points at intervals of a certain time (unit time).

S113: the position of the HMD1 is detected by information from the sensor unit 50 (various sensor information acquiring function 11), converted into a format for storing data (trajectory information processing function 12), and stored in various data 42 in the storage unit 40 (trajectory information storing function 13). The position on the plane of HMD1 is detected by GPS sensor 51 of sensor unit 50. By receiving radio signals from a plurality of GPS satellites, the global position coordinates (longitude, latitude) of the HMD1 can be detected. The height can be detected by the height sensor 56 of the sensor unit 50. For example, the air pressure is measured and the height is calculated.

The position information of the HMD1 can be obtained by using global position information such as information transmitted from Wi-Fi (registered trademark), Bluetooth (registered trademark), or a mobile base station, in addition to GPS information from GPS satellites, and therefore, when the GPS information cannot be obtained, the position information of the HMD1 is obtained using the information. Of course, other information may be combined with the GPS information. Hereinafter, information for acquiring global position information is referred to as "GPS information or the like".

In addition, if the global position information is always acquired with high accuracy, the trajectory information of the user can be arranged in the spatial coordinate system using the global position information. On the other hand, a unique spatial coordinate system (local coordinate system) with the trajectory acquisition start position as the origin may be generated, and the change in position may be calculated from the movement distance and the movement direction of the user detected by the acceleration sensor 54, the gyro sensor 55, and the like of the sensor unit 50 and arranged in the local coordinate system. Then, the global position information is acquired at a location where the global position information can be acquired with high accuracy, and is associated with the position information of the local coordinate system, whereby the present invention can be applied to, for example, a case where the trajectory is not continuously acquired and a case where the trajectory information of another person is viewed.
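
To make the local coordinate system concrete, the following is a minimal sketch of such dead reckoning, assuming the sensor readings can be reduced to one (distance, heading) pair per unit time; the function and the axis convention are illustrative assumptions, not part of the patent:

```python
import math

def integrate_steps(steps):
    """Accumulate (distance, heading_deg) readings into local (x, y) coordinates.

    steps: iterable of (distance, heading_deg) per unit time, e.g. derived from
    the acceleration sensor 54 and the gyro sensor 55. The trajectory acquisition
    start position is the origin of the local coordinate system.
    """
    x, y = 0.0, 0.0
    track = [(x, y)]
    for distance, heading_deg in steps:
        heading = math.radians(heading_deg)
        x += distance * math.cos(heading)  # assumed axis convention: 0 degrees = +x
        y += distance * math.sin(heading)
        track.append((x, y))
    return track

# usage: two 1 m steps at heading 0, then one 1 m step at heading 90
print(integrate_steps([(1.0, 0.0), (1.0, 0.0), (1.0, 90.0)]))
```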

S114: it is determined whether or not the user issues a shooting instruction, and the process proceeds to S115 when the user issues the shooting instruction, and proceeds to S116 when the user does not issue the shooting instruction. In addition, the imaging function of the imaging section 71 prepares 2 imaging modes of still image imaging and moving image imaging. To distinguish them, for example, a voice command "photograph" is used as a photographing instruction of a still image, and a voice command "photograph start" is used as a photographing instruction of a moving image. In addition, a voice command "shooting end" is used for the moving image as an instruction of shooting end.

S115: the subject positioned in front of the HMD1 is photographed by the image pickup unit 71 of the image processing unit 5 (the photographed data acquiring function 14), and the photographed data is stored as various data 42 in the storage unit 40 (the photographed data storing function 15).

S116: it is determined whether or not the user has instructed the end of the trajectory information acquisition process. For example, it is determined whether or not an end instruction of the trajectory collection process is issued by the voice command "trajectory end". When the termination is instructed, the process proceeds to S117 to terminate the track information acquisition process. If the end of the trajectory information acquisition process is not instructed, the process proceeds to S118.

S118: it is judged whether the unit time has elapsed. Specifically, it is determined whether or not the timer has exceeded the unit time. When the unit time has elapsed, the process proceeds to S119, and when the unit time has not elapsed, the process returns to S114.

S119: the point number p is added with 1, and the process returns to S112 to reset the timer and start again.

With the above trajectory collection processing (S100), the trajectory points (position information) of the user can be collected at regular time intervals. For example, by setting the unit time of the timer to 1 second, the trajectory of the HMD1 can be saved every second. In parallel with this, subjects along the way can be photographed and stored in accordance with the user's instructions.
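
As a rough, runnable sketch of this collection loop (S111 to S119), with hypothetical callables standing in for the sensor reading, voice commands, and camera of the HMD:

```python
import time

UNIT_TIME = 1.0  # interval between trajectory points in seconds (S118)

def collect_trajectory(get_position, shutter_pressed, stop_requested, take_photo):
    """Simplified flow of S111-S119; the four callables are hypothetical stand-ins."""
    track = []                                  # trajectory information (table 710)
    photos = {}                                 # shot data keyed by point number (tables 810/820)
    p = 0                                       # S111: initialize the point number
    deadline = time.monotonic() + UNIT_TIME     # S112: start the timer
    track.append(get_position())                # S113: store the position of point 0
    while not stop_requested():                 # S116: e.g. voice command "trajectory end"
        if shutter_pressed():                   # S114: shooting instruction?
            photos[p] = take_photo()            # S115: store shot data at this point
        if time.monotonic() >= deadline:        # S118: has the unit time elapsed?
            p += 1                              # S119: advance the point number
            deadline = time.monotonic() + UNIT_TIME  # back to S112: restart the timer
            track.append(get_position())        # S113: store the next trajectory point
    return track, photos
```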

Here, two formats of the trajectory information data (data tables) stored in the storage unit 40 by the trajectory collection process (S100) will be described.

Fig. 7A shows a trajectory information storage table 710 that stores position information as measured values. The items include a point number 711 indicating the order of each track point and position coordinates 712 corresponding to the track point. The position coordinates 712 use the plane position information (X, Y) from GPS information or the like and the height value (Z) obtained by the height sensor 56. Point number 0 holds the position coordinates (X0, Y0, Z0) at the start of the trajectory collection process (S110), and point number 1 holds the position coordinates (X1, Y1, Z1) one unit time (here, 1 second) later. Point number k holds the position coordinates (Xk, Yk, Zk) at the time the user issued a shooting instruction (S114) k seconds later. Point number n holds the position coordinates (Xn, Yn, Zn) at the end of the trajectory collection process (S117), n seconds after the start.

Since the position coordinates 712 in the data table are fixed-length data, the point number 711 can be omitted if the records are stored in time-series order. That is, the position-coordinate data of a target point number p can be located by multiplying the fixed record length by p.
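
A minimal illustration of this fixed-length lookup, assuming (hypothetically) that each record packs X, Y, Z as three 8-byte floats in point-number order:

```python
import struct

RECORD = struct.Struct("<3d")   # one trajectory point: X, Y, Z (assumed layout)

def read_point(table: bytes, p: int):
    """Return the position coordinates of point number p from a fixed-length table."""
    offset = RECORD.size * p    # point number times the fixed record length
    return RECORD.unpack_from(table, offset)

# usage: pack three points and read back point number 1
table = b"".join(RECORD.pack(*c) for c in [(0, 0, 0), (1.5, 2.0, 0.1), (3.0, 4.0, 0.2)])
print(read_point(table, 1))     # -> (1.5, 2.0, 0.1)
```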

Here, when the position information cannot be acquired from GPS information or the like, relative value information (difference information) from the acceleration sensor 54, the gyro sensor 55, and the like of the sensor unit 50 is temporarily stored. Then, once position information (absolute value information) from GPS information or the like can be acquired at any one trajectory point, the temporarily stored relative values are converted into absolute values.

Fig. 7B shows a difference trajectory information storage table 720 in which position information is stored as difference values. When the change in the measured values between trajectory points is small, storing the differences (amounts of change) rather than the measured values themselves reduces the amount of data to be stored, which is more practical.

As an item, the point number 721 and the difference position coordinate 722 corresponding to the locus point are included. In the difference position coordinates 722, a difference value between the position coordinate of the previous track point and the position coordinate of the track point at the current time is described. That is, the difference (Δ X, Δ Y) of the plane position coordinates is described by the distance in the longitude direction and the distance in the latitude direction between 2 locus points. The difference (Δ Z) in the height direction is also described as a distance.

Point number 1 indicates the difference position coordinates (ΔX1, ΔY1, ΔZ1) 1 second after the start of the trajectory collection process, point number k indicates the difference position coordinates (ΔXk, ΔYk, ΔZk) after k seconds, and point number n indicates the difference position coordinates (ΔXn, ΔYn, ΔZn) at the end of the trajectory collection process (n seconds after the start). The relationship with the values of the position coordinates 712 in fig. 7A (trajectory information storage table 710) is:

ΔXp = Xp − X(p−1), ΔYp = Yp − Y(p−1), ΔZp = Zp − Z(p−1)

Fig. 7C shows a start point/end point position coordinate table 730 that stores the position information at the start and end of the trajectory collection. This table is necessary when calculating the display position (absolute position) of the avatar using the difference trajectory information storage table 720 of fig. 7B.

The items consist of a start point/end point distinction 731 indicating whether the row is the start point or the end point of the trajectory collection, and position coordinates 732 representing the position information as absolute values. Of course, the start point equals the position coordinates (X0, Y0, Z0) of point number 0 in fig. 7A (trajectory information storage table 710), and the end point equals the position coordinates (Xn, Yn, Zn) of point number n.

Thus, if the position information (absolute values) 732 of the start point and the end point is stored, the difference trajectory information storage table 720 of fig. 7B can be converted into the trajectory information storage table 710 of fig. 7A. In particular, storing the position coordinates (Xn, Yn, Zn) of the end point allows the avatar to be displayed efficiently, for the following reason.

As described later, the avatar is displayed retrospectively from the end point (point number n) of the trajectory. If the position coordinates of the end point, where the display starts, were unknown, they would have to be calculated by adding all the difference position coordinates 722 of fig. 7B to the position coordinates (X0, Y0, Z0) of the start point in fig. 7C. If the position coordinates (Xn, Yn, Zn) of the end point are known, the avatar is first displayed at those coordinates; the next position of the avatar is then obtained simply by subtracting the difference position coordinates (ΔXn, ΔYn, ΔZn) at point number n of fig. 7B from the current position coordinates, which is easy to calculate. The same step is repeated thereafter.
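
The following sketch shows this backward reconstruction, assuming the difference table 720 and the end-point row of table 730 are available as plain Python structures:

```python
def positions_backward(end_point, deltas):
    """Walk the trajectory backward using tables 720/730.

    end_point: (Xn, Yn, Zn), absolute coordinates of the end point (table 730).
    deltas:    {p: (dX, dY, dZ)}, difference coordinates per point number (table 720).
    Yields (point_number, (X, Y, Z)) from the newest point to the oldest.
    """
    n = max(deltas)              # the last point number
    x, y, z = end_point          # the avatar is first placed at the end point
    yield n, (x, y, z)
    for p in range(n, 0, -1):    # subtracting delta p steps back to point p-1
        dx, dy, dz = deltas[p]
        x, y, z = x - dx, y - dy, z - dz
        yield p - 1, (x, y, z)

# usage: the end point and two difference rows reconstruct points 2, 1 and 0
for num, pos in positions_backward((3.0, 4.0, 0.0),
                                   {1: (1.0, 2.0, 0.0), 2: (2.0, 2.0, 0.0)}):
    print(num, pos)
```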

These pieces of trajectory information (the trajectory information storage table 710, the difference trajectory information storage table 720, and the start point/end point position coordinate table 730) are stored not only in the various data 42 of the storage unit 40, but, as described with reference to fig. 3, can also be stored in the external server 6 via the network 5, either per trajectory point or when a trajectory information table is completed. By storing data in the server 6, the amount of data stored in the HMD1 can be reduced.

Next, storage of the shot data will be described. The shot data corresponds to both a still image and a moving image.

Fig. 8A shows a still image data storage table 810 that stores still images. The still image data storage table 810 includes a point number 811 indicating the track point at which a shooting instruction was issued, a shooting direction 812, a shooting data length 813, and shooting data 814. In this example, at track point k (point number k), the HMD1 shoots in the direction Θk, and the shooting data Dk of data length Mk is stored in the various data 42 of the storage unit 40 (S115).

By storing the trajectory information storage table 710 in association with the still image data storage table 810 of the present example, it is possible to reproduce a still image captured in synchronization with the movement position of the HMD1 per elapsed unit time (here, 1 second).

Fig. 8B shows a moving image data storage table 820 for storing moving images. The moving image data storage table 820 is composed of a point number 821 indicating a locus point when an image pickup instruction is issued, an image pickup direction 822 at the start of image pickup, an image pickup data length 823, and image pickup data 824, as in the case of the still image shown in fig. 8A. Further, in the case of a moving image, shooting time (start and end) 825 is described. This is to cope with a case where moving image shooting is performed across a plurality of track points. If a moving image is reproduced per unit time (every 1 second), it can be synchronized with the time at which the avatar is displayed.

In this example, at track point k (point number k), the HMD1 starts shooting in the direction Θk at the start time Tks and ends shooting at the end time Tke, and the shooting data Dk of data length Mk is stored in the various data 42 of the storage unit 40 (S115). The moving image data could be reproduced on the entire display screen of the HMD, but in this example it is reproduced on a reduced sub-screen from the moving image shooting start time Tks to the moving image shooting end time Tke.
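
For illustration, the rows of tables 810 and 820 could be modeled as records keyed by point number; the field names are assumptions for the sketch, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class StillRecord:      # one row of the still image data storage table 810
    point: int          # point number 811: track point at the shooting instruction
    direction: float    # shooting direction 812 (Θk)
    data: bytes         # shooting data 814 (the data length 813 is len(data))

@dataclass
class MovieRecord:      # one row of the moving image data storage table 820
    point: int          # point number 821
    direction: float    # shooting direction 822 at the start of shooting
    data: bytes         # shooting data 824
    t_start: float      # shooting start time Tks (field 825)
    t_end: float        # shooting end time Tke; may span several track points

# during avatar playback, records are looked up by the current point number m
stills = {r.point: r for r in [StillRecord(point=5, direction=90.0, data=b"...")]}
if 5 in stills:
    pass  # reproduce stills[5].data on the sub-screen when the avatar reaches point 5
```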

As described with reference to fig. 3, the information on the shot data (the still image data storage table 810 and the moving image data storage table 820) can be stored in the external server 6 via the network 5, thereby reducing the amount of data to be stored in the HMD 1.

Next, the avatar display process (S200) of the HMD1 will be described. The processing flow is stored in the various programs 41 in the storage unit 40. The avatar is displayed by reading avatar information from the avatar information storage function 16, generating the avatar (avatar generation function 17), and displaying it on the display unit 72 of the HMD1 (avatar display function 18). The shape of the avatar is the same size as the user, and its height is set by the user himself. For convenience, the height may be set to the distance from the floor to the HMD1, measured by the distance sensor 53 of the sensor unit 50 while the user is standing, plus 10 cm. The avatar is displayed superimposed on the background image on the display unit 72 of the HMD, at the display position given by the position information stored in the trajectory information storage table 710 or the like.

The trajectory information and shot data are used directly when they are stored in the various data 42 of the HMD; when they are stored in the external server 6, they are acquired from the server 6 via the LAN communication unit 61 of the communication processing unit 60 and then used.

In this embodiment, based on the trajectory points of the user, the avatar is displayed retrospectively, from the temporally newer trajectory information toward the older trajectory information. By adopting this temporally backtracking display order, the user can return from the end point (current position) to the start point (departure point) along the same route actually traveled.

Fig. 9A is a flowchart showing the whole of the avatar display processing (S200).

S210: the avatar display process is started (S200). In this example, the user utters the voice command "avatar start" indicating the start of avatar display, thereby determining that the avatar display start instruction is issued.

S211: the position and orientation of the HMD1 at the current moment of wear by the user is detected. The detection method is performed by the GPS sensor 51, the geomagnetic sensor 52, the altitude sensor 56, and the like of the sensor unit 50, as in S113 in the trajectory collection process (S100).

S212: the trajectory information storage table 710 (or the difference trajectory information storage table 720) is referred to, and the point number s of the position coordinate closest to the position of the HMD1 at the current time is searched for. That is, s ≠ n if the current position is the end point of trajectory collection, but may also be s ≠ n if the user has subsequently moved from the end point.

S213: the point number s closest to the current position is set as the initial value of the displayed point number m. This enables display from a locus point close to the current position of the user.

S214: a timer is started. The timer uses a clock built in the main control section 10. The timer is used because the user's trajectory is collected at regular time intervals (unit time) and is displayed accordingly. However, the unit time used for displaying the avatar may be different from the unit time of the trajectory collection process, and may be multiplied by an arbitrary coefficient. For example, if the unit time used for displaying the avatar is set to 1/2 (0.5 seconds in this example), the displayed avatar can be moved at 2 times speed. On the other hand, if the unit time used for displaying the avatar is set to 2 times (2 seconds in this example), the displayed avatar can be moved at 1/2 speed (slow speed). Alternatively, the avatar may be displayed in a moving image (animated) by displaying the avatar sequentially (e.g., every 1/24 seconds) between the track point and the track point in an interpolation manner.

S215: the position and orientation of the displayed virtual avatar are computed. When the track information is stored in the track information storage table 710, the value (X) of the position coordinate 712 corresponding to the point number m is readm,Ym,Zm) Configuring the avatar in a planar position (X)m,Ym) Height (Z)m)。

In the case where the trajectory information is stored in the difference trajectory information storage table 720, the values (differences) of the difference position coordinates 722 at point number (m+1) are read and subtracted from the last display position (point number m+1). Namely:

Xm = X(m+1) − ΔX(m+1), Ym = Y(m+1) − ΔY(m+1), Zm = Z(m+1) − ΔZ(m+1)

However, for the first time (m = s), the position coordinates (Xn, Yn, Zn) of the end point described in the start point/end point position coordinate table 730 and the values of the difference position coordinates 722 down to point number s are used.

For convenience, the direction in which the avatar faces is a direction connecting the previous track point (m +1) and the current track point (m).

S216: the position and orientation of HMD1 at the current time are detected. This is the same processing as S211, and can be omitted for the first time (m ═ S) because it was detected in S211.

S217: the visual field calculation function 19 determines the display area of the HMD1, that is, the visual field area of the user, from the position and direction of the HMD1 at the current time, and determines whether or not the avatar placement position calculated in S215 is within the display area. If it is determined that the avatar can be arranged in the display area of HMD1, the process proceeds to S218, and if it is determined that the avatar cannot be arranged, the assist process of S300 is performed. In the help processing in S300, guidance processing is performed such as changing the direction of the field of view of the user or moving the user to a position where the avatar can be seen, and details thereof will be described later with reference to fig. 9B. By the determination processing in S217 and the assist processing in S300, the avatar can be placed in the field of view of the user, and the user can move (track) without losing track of the avatar or without overtaking the avatar.

S218: when it is determined that the avatar can be displayed in the display area of HMD1, the avatar information storage function 16 reads the avatar information and generates an avatar. Then, on the display unit 72 of the HMD1, the avatar is displayed at the display position and orientation of the avatar calculated in S215.

S219: referring to the still image data storage table 810 or the moving image data storage table 820, it is determined whether or not the current point number m displayed is the imaging point k. If it is determined that the track point is a shot point, the process proceeds to S220. If it is determined that the image is not a shooting point, the process proceeds to S222.

S220: the current time is notified as a shooting point. Regarding the notification method of the shooting point, the user may be notified by changing the color of the avatar in display or blinking the avatar.

S221: in response to an instruction from the user, the corresponding shot data is read from the still image data storage table 810 or the moving image data storage table 820 and reproduced. Reproduction of the shot data is performed by a voice command "reproduction". The reproduction of the shot data may be performed in the entire display screen of the HMD, but in this example, the shot data is displayed in a reduced sub-screen, and is displayed for a fixed time period, and then the reproduction of the shot data is terminated.

S222: it is determined whether the end of the avatar display process is instructed by the user. For example, the end instruction of the avatar display processing is determined from the voice command "avatar end". When the end of the avatar display processing is instructed, the process proceeds to S226, and the avatar display processing is ended.

S223: it is judged whether the unit time has elapsed. If it is determined that the unit time has elapsed, the process proceeds to S224. If it is determined that the unit time has not elapsed, the process returns to S222.

S224: the new point number is obtained by subtracting 1 from the point number m. Thus, the first 1 trace point backtracking in time is returned.

S225: it is determined whether the value of the new point number m is less than 0 (i.e., whether the value of the new point number m exceeds the starting point m of the trajectory by 0). If the value of the point number m is not less than 0, the process returns to S214, and the avatar display is continued for the next track point. If the value of the point number m is less than 0, the process proceeds to S226 to end the avatar display process because the display of avatars at all the trajectory points is completed.

Fig. 9B is a flowchart showing the help processing (S300) in fig. 9A. This process is performed when it is determined in S217 that the avatar cannot be placed in the display area. After the process ends, the process returns to S211.

S301: the field-of-view calculation function 20 determines whether or not a virtual avatar can be configured by steering of the HMD 1. If yes, the process proceeds to S302, and if no, the process proceeds to S303.

S302: since this corresponds to a case where the HMD is not oriented in the direction of existence of the avatar, the user is guided to change the direction of the HMD with sound or display. For example, the audio decoding unit 83 of the audio processing unit 80 issues audio guidance "right", "left", and the like. At this time, the avatar may be displayed on the screen and the user may speak. Thereafter, the process returns to S211.

Here, fig. 10A is a diagram showing an example of the steering guidance display in S302. In the display screen 400, an avatar 3 appears, and a message 401 indicating the existence of the avatar is displayed on the right side. When the user faces the right side in response to this, a new direction of the HMD is detected in S216 of fig. 9A. In the judgment of S217, the avatar exists in the new display area, and in S218, the avatar can be displayed.

S303: if the avatar cannot be configured by the steering of HMD1 in S301, the current position is considered to be far from any point where the trajectory was collected. Then, in order to direct the user to the approach point, it is determined whether the map information can be acquired. For example, the process proceeds to S304 when the navigation application is available. If the map information cannot be acquired, the process proceeds to S306.

S304: using the navigation application, a route analysis is performed to the nearest track point S retrieved in S212.

S305: route information up to the track point s (new track point) is added to the track information holding table 710. Returning to S211, the added trajectory point is included as the display object position of the avatar. As a result, the avatar is arranged in the display area of the HMD in S217, and the avatar can be displayed in S218.

S306: in S303, when the map information cannot be acquired, the moving direction to the closest track point S is presented to the user. The prompting method is carried out by sound or AR display. Thereafter, the process returns to S211. When the user moves to the indicated direction, the user can approach the nearest track point s. As a result, the avatar is configured in the display area in S217, and the avatar can be displayed in S218.

Here, fig. 10B is a diagram showing an example of the moving direction guidance display in S306. On the display screen 400, the AR display 402 notifies the user of the direction in which to move. The AR display 402 may use characters, icons (arrows), or the like, as long as the direction to proceed is clear.

As described above, when it is determined that the avatar cannot be placed in the HMD display area, the user can be guided to the position and direction in which the avatar can be seen by performing the help processing of S300. Thus, the user can move (track) without losing track of the avatar.

Fig. 11A and 11B are diagrams showing examples of display of avatars. This example corresponds to the above-described schematic diagram of the trace collection shown in fig. 5.

On the display screen 400 of fig. 11A, the avatar 3 is displayed so as to retrace in time the trajectory points collected by the HMD. That is, the display position of the avatar 3 moves in the arrow direction through the track points 504, 503, 502, 501 in this order (indicated by ● symbols). Here, the track point 503 is the shooting point k and is therefore represented by a different symbol to distinguish it from the other positions. In order to reproduce the direction the user 2 (HMD) was facing during trajectory collection, the facing direction of the avatar 3's body while moving is set opposite to its traveling direction (walking backward), representing a return along the route by which the user originally came.

The display screen 400 of fig. 11B represents the display state of the avatar 3 at the track point 503. The locus point 503 is a turn, so the facing direction of the avatar 3 is changed. In addition, since the track point 503 is a photographed point, the color of the avatar 3 is changed for display. Here, when the user instructs reproduction of the shot data, an image 510' of the subject 510 shot at the shooting point is displayed.

In this example, the track points (● symbols) and the traveling direction (arrow) are displayed on the display screen 400 in order to inform the user of the return route, but they may be omitted.

The avatar is displayed at a trajectory point each time the unit time (1 second) elapses, but the display is not limited to this and may use other time intervals. For example, when the display is updated every 1/24 second, the avatar can be shown as a smooth moving image.

Next, a method of operating HMD1 by the user will be described. As described above, in the present embodiment, a voice command is used as an operation by the user.

Fig. 12 is a table showing an example of a voice command used for the operation of the HMD. The items of the voice command table 600 are composed of a category 601 of commands, a voice command 602 issued by the user, names 603 of commands, and processing contents 604.

The classification 601 of voice commands is divided into trajectory collection-related and avatar display-related commands. The trajectory collection-related voice commands 602 include "trajectory start" to start the trajectory collection process, "trajectory end" to end the trajectory collection process, "photograph" to capture a still image, "shooting start" or "recording start" to start moving image capture, "shooting end" to end moving image capture, and the like.

The avatar display-related voice commands 602 include "avatar start" to start the avatar display process, "avatar end" to end the avatar display process, and "reproduction" to reproduce the shot data (a still image or moving image). The voice commands mentioned here are examples and can be set appropriately according to the preference of the user.

On the other hand, the HMD1 can give a voice response to a voice command issued by the user, using the voice synthesis function of the audio decoding unit 83 of the audio processing unit 80. For example, the HMD can repeat the user's voice command back to the user, or confirm it with an inquiry such as "Is it XX?"; if the user answers "yes", XX is executed, and if the user answers "no", XX is cancelled. The voice commands "yes" and "no" used in such responses are not shown in fig. 12, but they are also included in the voice command table 600.
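
A toy sketch of this confirmation exchange, with hypothetical hooks standing in for the voice recognition unit 82 and the audio decoding unit 83:

```python
def confirm_and_run(command, say, listen, execute):
    """Repeat the recognized command back and act on the user's yes/no answer."""
    say(f"Is it {command}?")   # synthesized confirmation (audio decoding unit 83)
    if listen() == "yes":      # recognized answer (voice recognition unit 82)
        execute(command)       # perform the confirmed command
    # an answer of "no" cancels the command

# usage with trivial stand-ins
confirm_and_run("avatar start", say=print, listen=lambda: "yes",
                execute=lambda c: print("executing:", c))
```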

Further, using the voice synthesis function of the audio decoding unit 83, synthesized voices such as "right" for prompting a turn to the right and "left" for prompting a turn to the left can be generated. This can be used in step S306 of fig. 9B.

The operation instruction to the HMD1 may be implemented by a method other than a voice command. For example, an operation menu is displayed on the display screen of HMD1, and the user selects the operation menu by a gesture (movement of a hand or a finger). In this case, the user's finger is imaged by the imaging unit 71 of the image processing unit 70, and the movement of the finger can be recognized and used for operation based on the imaged image.

Fig. 13 is a diagram illustrating an operation instruction given with the user's finger. On the operation screen 410 of the HMD 1, for example, a "photo" menu 411 and a "video" menu 412 are displayed as the operation menu. The menu display is preferably translucent, but is not limited thereto.

When the user brings a finger 415 toward the display screen 410 from outside the HMD 1 to select, for example, the "video" menu 412, the imaging unit 71 detects the finger image 415 gradually growing larger as it approaches the display screen 410. When the finger image 415 is recognized as pointing at the "video" menu 412, it is determined that "video" has been selected. As another mode of operation, only a "start" menu is displayed when trajectory collection or avatar display can be started, and only an "end" menu when it can be ended; the approach of the user's finger image 415 captured by the imaging unit 71 is then simply interpreted as a start or end instruction.
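One plausible way to realize the "approaching finger" judgment is to track the apparent size of the detected finger region across frames; the growth ratio and the bounding-box representation below are assumptions for illustration, not the patent's method:

```python
def detect_menu_selection(finger_boxes, menu_rects, growth_ratio=1.5):
    """Decide which menu item a finger approaching the display points at.

    `finger_boxes` is a time-ordered list of (x, y, w, h) bounding boxes of
    the finger image extracted from the imaging unit's frames; `menu_rects`
    maps menu names ("photo", "video", ...) to screen rectangles.
    """
    first, last = finger_boxes[0], finger_boxes[-1]
    if last[2] * last[3] < growth_ratio * first[2] * first[3]:
        return None  # finger image not growing: no approach detected
    fx, fy = last[0] + last[2] / 2, last[1] + last[3] / 2  # fingertip center
    for name, (mx, my, mw, mh) in menu_rects.items():
        if mx <= fx <= mx + mw and my <= fy <= my + mh:
            return name  # finger is aimed at this menu item
    return None
```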

Alternatively, a touch sensor may be provided in the operation input unit 90 of the HMD1 to detect that the hand or finger of the user touches the HMD 1.

According to embodiment 1, a user who has forgotten the places passed through and the return route while moving can easily grasp the return route by following the avatar.

Example 2

In embodiment 1, the position coordinates of the HMD are stored in three dimensions (X, Y, Z); in embodiment 2, the case where they are stored in two dimensions (X, Y) will be described. The basic structure of the HMD in embodiment 2 is the same as in embodiment 1, and only the points of difference will be described.

When the user moves on an ordinary flat floor, the height changes little, so even if the height data is omitted, the avatar can be displayed from the plane position coordinates alone. The trajectory collection process follows the same flowchart (S100) as embodiment 1 (fig. 6). The collected data is then saved in the tables shown in figs. 14A and 14B.

Fig. 14A shows a two-dimensional trajectory information storage table 740 in which position information is held as difference values. Its items are a point number 741 identifying each track point and differential position coordinates 742 indicating the plane position of the track point. As in fig. 7B, the differential position coordinates 742 are not absolute position coordinates but differences (amounts of change) between adjacent track points, which reduces the amount of data to be stored.

In this example the unit time is 1 second, so point number 1 holds the differential position coordinates (ΔX1, ΔY1) one second after the start of the trajectory collection process, point number k holds the differential position coordinates (ΔXk, ΔYk) after k seconds, and point number n holds the differential position coordinates (ΔXn, ΔYn) at the end of the trajectory collection process (n seconds after the start). The differential position coordinates are obtained in the same manner as in embodiment 1 (fig. 7B).
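A minimal sketch of this storage scheme, assuming 1-second sampling and plain Python tuples rather than the patent's actual table layout; the second helper shows how the absolute avatar display positions are recovered by cumulative summation:

```python
def to_difference_table(track):
    """Convert absolute 2-D track points into a start point plus differences.

    `track` is a list of (x, y) positions sampled every unit time (1 s).
    Mirrors tables 740/750: the start point is kept as an absolute value,
    every following point as the (dx, dy) change from its predecessor.
    """
    start = track[0]
    diffs = [(x1 - x0, y1 - y0)
             for (x0, y0), (x1, y1) in zip(track, track[1:])]
    return start, diffs


def to_absolute_positions(start, diffs):
    """Recover absolute display positions by summing the differences."""
    positions = [start]
    x, y = start
    for dx, dy in diffs:
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions
```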

Fig. 14B shows a two-dimensional start/end point position coordinate table 750 that stores the position information at the start and end of trajectory collection. This table is needed to calculate the display position (absolute position) of the avatar from the two-dimensional trajectory information storage table 740 of fig. 14A.

Each entry consists of a start/end distinction 751 indicating whether the row is the start point or the end point of trajectory collection, and position coordinates 752 expressing the position as absolute values, similarly to fig. 7C described above. However, in the position coordinates 752, the height coordinate (Z0) is stored only for the start point coordinates (X0, Y0, Z0). This is to cope with the case where the amount of change in height exceeds a predetermined value (threshold) during acquisition of the trajectory information.

Embodiment 2 handles only two-dimensional position coordinates, but when the height changes, the processing is performed as follows.

When ascending (or descending) a slope, the changes (ΔX, ΔY) in the plane position coordinates of the HMD are linear, and the change (ΔZ) in the height of the HMD is also linear. When ascending (or descending) stairs, the changes (ΔX, ΔY) in the plane position coordinates correspond to the tread depth of each step, and the change (ΔZ) in height corresponds to the riser height.

On an escalator, the changes (ΔX, ΔY) in the plane position coordinates of the HMD are linear, as is the change (ΔZ) in height. An escalator is distinguished from a slope by the fact that the acceleration changes very little, since the user stands still while riding. In an elevator, the plane position coordinates of the HMD hardly change (ΔX ≈ 0, ΔY ≈ 0) and only the height changes (ΔZ ≠ 0).

In this way, movement on a slope, stairs, an escalator, an elevator, and the like produces characteristic changes in the detected plane position coordinates (ΔX, ΔY) and height ΔZ. By capturing these changes, path information (slope, stairs, escalator, elevator, etc.) can be shown when the avatar is displayed. Moreover, superimposing the avatar on an image of the slope, stairs, escalator, or elevator gives it a strong sense of presence. Linking to map information such as an underground shopping-street map is also highly effective.
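These characteristic signatures suggest a simple classification heuristic, sketched below; all threshold values are illustrative assumptions, and distinguishing stairs from a slope in detail would additionally require examining whether ΔZ accumulates stepwise or linearly:

```python
import statistics

def classify_path_segment(dxy, dz, accel_samples,
                          move_eps=0.05, dz_eps=0.05, accel_var_eps=0.02):
    """Heuristically classify one track-point interval from its signature.

    dxy: horizontal displacement over the interval (m)
    dz: height change over the interval (m)
    accel_samples: acceleration magnitudes sampled during the interval
                   (at least two samples are needed for the variance)
    """
    if abs(dz) <= dz_eps:
        return "flat" if dxy > move_eps else "stationary"
    if dxy <= move_eps:
        return "elevator"      # height changes while plane position does not
    if statistics.variance(accel_samples) < accel_var_eps:
        return "escalator"     # smooth ride: acceleration changes very little
    # Slope vs. stairs could be separated by the shape of the dz profile.
    return "slope or stairs"
```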

When the amount of height change exceeds a predetermined value (threshold), the trajectory collection process using the two-dimensional trajectory information storage tables 740 and 750 is terminated as trajectory A. The process then switches to trajectory collection using the three-dimensional trajectory information storage tables 710 to 730 of embodiment 1, recorded as trajectory B. When the height change falls back within the threshold, trajectory collection using the two-dimensional tables 740 and 750 resumes as trajectory C.
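The threshold-driven switch between two-dimensional and three-dimensional logging (trajectories A, B, C) might be sketched as follows; the threshold value and the segment representation are assumptions:

```python
def split_into_tracks(points, z_threshold=0.5):
    """Split a 3-D sample stream into 2-D and 3-D logging segments.

    points: list of (x, y, z) samples taken every unit time.
    Whenever the height change between consecutive samples exceeds
    z_threshold (an assumed value in meters), logging switches to the
    three-dimensional tables; it switches back once the height settles.
    Returns a list of ("2D" | "3D", samples) pairs: trajectories A, B, C...
    """
    segments, current, mode = [], [points[0]], "2D"
    for prev, cur in zip(points, points[1:]):
        new_mode = "3D" if abs(cur[2] - prev[2]) > z_threshold else "2D"
        if new_mode != mode:
            segments.append((mode, current))   # close the current trajectory
            current, mode = [prev], new_mode   # boundary point starts the next
        current.append(cur)
    segments.append((mode, current))
    return segments
```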

With this method, the amount of stored data can be kept to a minimum. Naturally, the two-dimensional trajectory information storage tables 740 and 750 of embodiment 2 can also be converted into the three-dimensional trajectory information storage tables 710 to 730 of embodiment 1.

Embodiments 1 and 2 described above assume that the GPS sensor 51 of the sensor unit 50 can acquire GPS information (position information) relatively easily outdoors, but acquisition may be difficult indoors or underground. When GPS information cannot be acquired, it is supplemented with information from the acceleration sensor 54, the gyro sensor 55, and the other sensors of the sensor unit 50.

When position information such as GPS information cannot be acquired at all from the start point to the end point, the position coordinates of the start point are stored as X0 = 0 and Y0 = 0, and the height coordinate is likewise stored as Z0 = 0, making the start point a local origin. The position coordinates of the end point are then calculated by summing all the differential position coordinates from the start point to the end point.

Example 3

In embodiment 1, the avatar travels backward when displayed: its facing direction is set opposite to its direction of travel. In embodiment 3, by contrast, the avatar is displayed facing its direction of travel.

Fig. 15 is a schematic diagram of the avatar display in embodiment 3. On the display screen 400, the avatar 3 is displayed in a posture facing its direction of travel (forward). That is, the facing direction of the avatar 3 is set to the vector direction from the current track point 503 to the next track point 502. This eliminates the unnatural impression in embodiment 1 (fig. 11B), where the avatar's direction of travel did not coincide with its facing direction.
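The facing direction in either mode reduces to a yaw angle computed from the vector between the current and next track points; a minimal sketch, with the angle convention an assumption:

```python
import math

def facing_direction(current, nxt, forward=True):
    """Return the avatar's yaw angle in degrees for the given display mode.

    `current` and `nxt` are (x, y) track points; with forward=True the
    avatar faces the direction of travel (embodiment 3), with False it
    faces the opposite way and travels backward (embodiment 1).
    """
    dx, dy = nxt[0] - current[0], nxt[1] - current[1]
    yaw = math.degrees(math.atan2(dy, dx))
    return yaw if forward else (yaw + 180.0) % 360.0
```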

Whether the avatar should face forward (embodiment 3) or backward (embodiment 1) is also a matter of user preference. It is therefore even better if the user can select which display mode to use for the avatar's facing direction.

Example 4

In embodiment 4, when a building or the like is present in the field of view of the HMD and the avatar is hidden behind it, the hidden part of the avatar is not displayed.

Fig. 16 is a schematic diagram of the avatar display in embodiment 4. Here, the avatar 3 travels backward. Buildings 521 and 522 stand near the movement path, and the avatar 3 is occluded by the building 521. In this case, the part of the avatar 3 blocked by the building 521 is controlled so as not to be displayed. Whether the avatar 3 is hidden can be determined by comparing the distance to the buildings 521 and 522 with the distance to the avatar 3; the distances to the buildings 521 and 522 can be measured with the distance sensor 53.

In this way, when the avatar is superimposed on the real space seen through the HMD, a realistic display free of unnaturalness can be achieved. However, if the entire avatar is occluded and not displayed at all, there is a risk that the user can no longer follow it. A configuration may therefore be adopted in which the occluded part of the avatar is displayed semi-transparently so that it can still be followed.
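The distance comparison could look like the following sketch; the semi-transparent alpha value, and treating the distance sensor's reading as a single value along the avatar's line of sight, are assumptions for illustration:

```python
def avatar_opacity(avatar_distance, building_distance, occluded_alpha=0.3):
    """Choose how to draw the avatar when a building may hide it.

    avatar_distance: distance from the HMD to the avatar's track point.
    building_distance: distance sensor 53's reading along the same line
                       of sight, or None if no obstacle was detected.
    """
    if building_distance is None or avatar_distance <= building_distance:
        return 1.0            # avatar in front of (or no) building: opaque
    return occluded_alpha     # behind the building: draw semi-transparently
```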

Images of the buildings 521 and 522 can be displayed using, for example, Street View (Google), an internet service that provides roadside scenery as panoramic photographs. In addition, the names of buildings and the like may be displayed attached to them, according to the position and direction of the HMD 1.

Example 5

In embodiment 5, a configuration is adopted in which shooting is performed automatically, without a shooting instruction from the user.

The imaging unit 71 of the HMD starts shooting at the start point, captures images continuously in a cyclic manner, and temporarily stores the images of a predetermined period (for example, twice the unit time between track points used in the trajectory collection process) counted back from the current time. The automatically captured video is thus a moving image, or still images, covering a fixed period (for example, 2 seconds) back from the present. In the case of still images, temporarily storing only one or two images between one track point and the next is sufficient. Images captured at specific places on the movement route, for example where the direction of movement changed, are then retained permanently.

The specific operation of automatic shooting will be described with reference to the trajectory collection diagram of fig. 5. In fig. 5, at the time of track point 504, the images from track point 502 to track point 504 (twice the unit time) are temporarily stored. The movement direction (the movement vector between points) changes by 90 degrees at track point 503. The images captured from track point 502 to track point 503 are therefore automatically saved as the images taken just before the movement vector changed direction significantly. The saved image data for track point 503 may be a moving image (the frames captured up to track point 503) or a still image (the data captured at track point 503).
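The rolling-buffer behavior might be sketched as follows; the frame rate, the 45-degree direction-change threshold, and the class interface are illustrative assumptions:

```python
from collections import deque

class AutoCapture:
    """Rolling buffer of recent frames, retained when the user turns.

    Keeps the last `buffer_seconds` of frames (twice the 1-second unit
    time, as in embodiment 5) and saves them permanently when the movement
    vector between track points changes direction by more than
    `angle_threshold` degrees.
    """
    def __init__(self, fps=24, buffer_seconds=2, angle_threshold=45.0):
        self.frames = deque(maxlen=fps * buffer_seconds)
        self.angle_threshold = angle_threshold
        self.saved = []

    def on_frame(self, frame):
        self.frames.append(frame)          # cyclic, temporary storage

    def on_track_point(self, prev_heading, new_heading):
        # smallest angle between the two headings, in degrees
        change = abs((new_heading - prev_heading + 180.0) % 360.0 - 180.0)
        if change > self.angle_threshold:
            self.saved.append(list(self.frames))  # keep the pre-turn images
            self.frames.clear()
```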

According to the shooting method of embodiment 5, when the user changes direction, the view seen in the direction of travel just before the turn is saved, so the images best suited to evoking the user's memory can be displayed.

Example 6

In embodiment 6, information such as the user's posture is also collected during the trajectory collection process and reflected in the displayed avatar.

For example, the distance from the floor or ground surface to the HMD 1 (that is, the height of the user's head) is detected by the distance sensor 53 of the sensor unit 50 and collected for each track point. By comparing the detected value with the user's (known) height, the posture of the user wearing the HMD 1 (standing, sitting, squatting, lying, etc.) can be estimated. This posture information is reflected in the displayed avatar, which is shown standing, sitting, and so on accordingly.

Further, the geomagnetic sensor 52 of the sensor unit 50 can detect the facing direction of the face of the user wearing the HMD 1. This face-direction information is reflected in the displayed avatar, whose face direction is changed at each track point. The body orientation of the avatar may be estimated from the face direction, or may be matched to the vector (direction of travel) between track points. These displays make the avatar more lifelike.
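A minimal sketch of the posture estimate from the head-height measurement; the ratio thresholds are assumptions, not values given in the source:

```python
def estimate_posture(hmd_height, user_height):
    """Estimate the wearer's posture from the floor-to-HMD distance.

    hmd_height: distance from the floor to the HMD measured by the
                distance sensor 53 (the head height), in meters.
    user_height: the user's known standing height, in meters.
    """
    ratio = hmd_height / user_height
    if ratio > 0.9:
        return "standing"
    if ratio > 0.6:
        return "sitting"
    if ratio > 0.3:
        return "squatting"
    return "lying"
```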

Example 7

In embodiment 7, the time at which each track point was passed is stored during the trajectory collection process, and this passing time is shown on the display screen when the avatar is displayed.

Fig. 17A shows an example of the trajectory information storage table 710' in embodiment 7. It adds, to the trajectory information storage table 710 of embodiment 1 (fig. 7A), an elapsed time 713 recording when each track point was passed.

Fig. 17B shows an example of the avatar display. On the display screen 400, the elapsed time 420 is shown together with the avatar 3. This time is obtained by referring, in the trajectory information storage table 710' of fig. 17A, to the elapsed time 713 corresponding to the point number 711 at which the avatar is currently displayed.
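Looking up and formatting the elapsed time for the avatar's current point is straightforward; the dict-based model of table 710' below is an assumption for illustration:

```python
import time

def elapsed_time_label(table, point_number):
    """Format the passing time of the currently displayed track point.

    `table` models the trajectory information storage table 710' as a dict
    from point number 711 to elapsed time 713 (seconds since the start of
    trajectory collection).
    """
    seconds = table[point_number]
    return time.strftime("%H:%M:%S", time.gmtime(seconds))
```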

Thus, according to embodiment 7, the user can see when the currently displayed track point was passed, which is effective information for evoking the user's memory.

Example 8

In embodiment 8, a smartphone is used as the portable video display device instead of an HMD. The smartphone has substantially the same hardware and software structure as the HMD and can realize the same functions.

Fig. 18 shows the appearance of the smartphone 9, where (a) is the front side on which the image is displayed, and (b) is the rear side having the camera.

As shown in (b), a camera 901 for capturing the outside scene is provided on the rear side. The smartphone 9 has sensors for detecting its three-dimensional position and the direction of the camera 901, and can collect trajectory information in the same way as the HMD.

As shown in (a), the front side has a display screen 902 with an embedded touch panel, on which the image captured by pointing the camera 901 at the outside scene is displayed. On the display screen 902, the avatar 3 can then be superimposed on the displayed image of the outside scene, based on the collected trajectory information described above, in the same manner as in embodiment 1 (fig. 11A).

Compared with the HMD, the smartphone 9 requires the camera 901 to be pointed at the subject every time an image is captured for trajectory information collection, so the shooting operation becomes burdensome as the number of shooting points increases. Automatic shooting also has the drawback that it is unrealistic to expect the user to keep the shooting direction of the camera 901 fixed; since it is difficult for the user to shoot while moving, it is more realistic to shoot only at key points. Nevertheless, since trajectory information can be collected with an existing smartphone, convenience is improved.

A smartphone is given here as an example of a portable video display device, but the operation of the present embodiment can be realized by any equivalent hardware and software configuration. For example, the present invention can also be applied to a notebook PC, a tablet PC, and the like.

Example 9

In embodiment 9, an image display system used cooperatively by a plurality of users wearing HMDs will be described.

Fig. 19 is a diagram showing the configuration of an image display system in which a plurality of HMDs are connected. Here, two users 2a and 2b are shown, wearing HMD 1a and HMD 1b respectively. The same applies when the number of users is other than two.

The HMD 1a worn by user 2a is connected to an external server 6 via a wireless router 4a and a network 5. Likewise, the HMD 1b worn by user 2b is connected to the same server 6 via a wireless router 4b and the network 5. When HMD 1a (user 2a) and HMD 1b (user 2b) are close to each other, the wireless routers 4a and 4b can be shared.

In this configuration, the trajectories of the users 2a and 2b are collected by the HMDs 1a and 1b, and the trajectory information of user 2a wearing HMD 1a and of user 2b wearing HMD 1b is stored on the common server 6. Each HMD 1a, 1b then reads the trajectory information of both users from the server 6 and displays two avatars, the virtual images of the two users. In this way, the users can refer to each other's trajectories through the avatars.
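Sharing trajectory information through server 6 could be sketched as below; the endpoint URL and the JSON payload shapes are hypothetical, not part of the patent:

```python
import json
import urllib.request

SERVER_URL = "http://server6.example/trajectories"  # hypothetical endpoint

def upload_trajectory(user_id, track):
    """Store one user's collected trajectory on the shared server 6."""
    body = json.dumps({"user": user_id, "track": track}).encode()
    req = urllib.request.Request(
        SERVER_URL, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def download_trajectories(user_ids):
    """Fetch the trajectories of all cooperating users for avatar display."""
    query = ",".join(user_ids)
    with urllib.request.urlopen(f"{SERVER_URL}?users={query}") as resp:
        return json.loads(resp.read())
```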

Fig. 20 is a diagram showing an example of the display of multiple avatars. For example, on the display screen 400a of HMD 1a, in addition to the avatar 3a of user 2a wearing HMD 1a, the avatar 3b of user 2b wearing HMD 1b is also displayed. The trajectories 520a and 520b of the two users' movements are displayed with their respective track points (● and ▲ symbols), so the positional relationship of the two users can be traced back over time. In this example, it can be seen that the two users met at the intersection of track points 521. The same is displayed on the display screen 400b of HMD 1b.

Thus, according to embodiment 9, a plurality of users can confirm each other's trajectories. As an added benefit, when another user leaves the field of view during trajectory collection, that user's position can be known immediately.

As a practical variant of this embodiment, the portable information terminals may communicate directly with each other using short-range wireless communication such as Bluetooth (registered trademark), without using the server 6 connected via the wireless routers 4 and the network 5. For example, when a parent and child moving together become separated, the child carrying a smartphone can immediately locate the parent's smartphone or HMD.

While embodiments of the present invention have been described above as embodiments 1 to 9, the structure for realizing the technique of the present invention is not limited to these, and various modifications are conceivable. For example, part of the structure of one embodiment may be replaced with the structure of another, and the structure of one embodiment may be added to that of another. All such variants fall within the scope of the present invention. The numerical values, messages, and the like appearing in the text and drawings are only examples, and the effects of the present invention are not impaired if different ones are used.

The functions of the present invention described above may be realized in hardware, for example by designing them into an integrated circuit. They may also be realized in software, by having a microprocessor unit or the like interpret and execute programs that implement each function. Hardware and software may also be used together. The software may be stored in advance as the various programs 41 of the HMD 1 at the time of shipment, may be acquired after shipment from server apparatuses on the internet or the like, or may be supplied on a memory card, optical disc, or the like.

The control lines and information lines shown in the drawings are those considered necessary for the explanation, and not all control lines and information lines in the product are necessarily shown. In practice, almost all components may be considered to be interconnected.

Description of the reference numerals

1 … HMD (image display device), 2 … user, 3 … avatar, 4 … wireless router, 5 … network, 6 … server, 9 … smartphone (image display device), 10 … main control unit, 40 … storage unit, 50 … sensor unit, 51 … GPS sensor, 52 … geomagnetic sensor, 53 … distance sensor, 56 … height sensor, 60 … communication processing unit, 70 … image processing unit, 71 … imaging unit, 72 … display unit, 80 … voice processing unit, 90 … operation input unit.
