In-vehicle performance device, in-vehicle performance system, in-vehicle performance method, program, and instruction measurement device

Document No.: 1509525  Publication date: 2020-02-07  Views: 31  Language: Chinese

Reading note: This invention, "In-vehicle performance device, in-vehicle performance system, in-vehicle performance method, program, and instruction measurement device" (车内演出装置、车内演出系统、车内演出方法、程序及指示计测装置), was created by 岩崎瞬, 浅海寿夫, 石坂贤太郎, 近藤泉, 小池谕, 真锅伦久, 伊藤洋, and 林佑树 on 2018-06-15. Abstract: An in-vehicle performance device presents a game to be played by a passenger in a vehicle, the in-vehicle performance device including: an operation detection unit that detects an operation of the passenger that is not related to driving of the vehicle; a display unit that displays an image that can be visually confirmed by the passenger; and a display control unit that displays a response image corresponding to the movement of the passenger on the display unit based on the movement of the passenger detected by the movement detection unit, and outputs a result of the game based on a predetermined rule.

1. An in-vehicle performance device that presents a game played by a passenger in a vehicle,

the in-vehicle performance device is provided with:

an operation detection unit that detects an operation of the passenger that is not related to driving of the vehicle;

a display unit that displays an image that can be visually confirmed by the passenger; and

a display control unit that displays a response image corresponding to the movement of the passenger on the display unit based on the movement of the passenger detected by the movement detection unit, and outputs a result of the game based on a predetermined rule.

2. The in-vehicle performance device of claim 1,

the display control unit displays the response image on the display unit so as to overlap with an object in the surroundings of the vehicle.

3. The in-vehicle performance device of claim 2,

the action of the passenger is an action of indicating an object in the surrounding environment,

the motion detection unit detects a direction indicated by the passenger,

the display control unit causes the display unit to display the response image so as to overlap with a specific object in the indicated direction, based on the direction detected by the motion detection unit.

4. The in-vehicle performance device of claim 2 or 3,

the in-vehicle performance device further includes a score calculating unit that calculates, as an effective score, the number of times the motion detecting unit detects the motion indicating the specific object,

the display control unit causes the display unit to display an image representing the score calculated by the score calculating unit at a predetermined timing.

5. The in-vehicle performance device of claim 4,

the specific object is a specific road sign on the route to a destination,

the in-vehicle performance device further includes:

a destination information acquisition unit that acquires destination information indicating a destination;

an imaging unit that images the surrounding environment and generates a surrounding environment image; and

an extracting unit that extracts, from the surrounding environment image, the specific road sign on the route to the destination indicated by the destination information,

the score calculating unit calculates, as an effective score, the number of times the operation detecting unit detects the operation of the passenger indicating the position corresponding to the road sign extracted by the extracting unit.

6. The in-vehicle performance device of claim 5,

the specific object is another vehicle that is traveling in the vicinity of the vehicle,

the extracting unit extracts the other vehicle from the surrounding environment image,

the score calculating unit calculates, as an effective score, the number of times the motion detecting unit detects the motion of the passenger indicating the position corresponding to the other vehicle extracted by the extracting unit.

7. The in-vehicle performance device of claim 5 or 6,

the specific object is a signboard of a shop existing around the vehicle,

the extracting unit extracts the signboard from the surrounding environment image,

the score calculating unit calculates, as an effective score, the number of times the operation detecting unit detects the operation of the passenger indicating the position corresponding to the signboard extracted by the extracting unit.

8. The in-vehicle performance device of any one of claims 4 to 7,

the display control unit causes the display unit to display an image showing the total of the scores calculated by the score calculation unit.

9. An in-vehicle performance system comprising a plurality of in-vehicle performance devices according to any one of claims 4 to 8,

the in-vehicle performance device is provided with:

a transmitting unit that transmits information indicating the score calculated by the score calculating unit to another in-vehicle performance device; and

a receiving unit that receives score information indicating the score calculated by the other in-vehicle performance device,

the display control unit causes the display unit to display an image showing the score calculated by the score calculation unit and an image showing the score of the score information received by the reception unit.

10. The in-vehicle performance system of claim 9,

the display control unit displays, on the display unit, a comparison between the image showing the score of the score information received from the in-vehicle performance device mounted on a vehicle belonging to the same team as the present device and the image showing the score of the score information received from the in-vehicle performance device mounted on a vehicle belonging to another team.

11. The in-vehicle performance system of claim 9 or 10,

the display control unit displays an image for urging movement based on information indicating the behavior of the vehicle on which the device is mounted and information indicating the behavior of the vehicle on which the other in-vehicle performance device is mounted.

12. An in-vehicle performance method, wherein,

the in-vehicle performance method causes a computer, which is provided with a display unit and presents a game played by a passenger in a vehicle, to perform the following processing:

detecting an action of the passenger unrelated to driving of the vehicle;

displaying an image that can be visually confirmed by the passenger; and

based on the detected action of the passenger, a response image corresponding to the action of the passenger is displayed, and the result of the game based on a prescribed rule is output.

13. A program, wherein,

the program causes a computer, which is provided with a display unit and presents a game played by a passenger in a vehicle, to execute:

detecting an action of the passenger unrelated to driving of the vehicle;

displaying an image that can be visually confirmed by the passenger; and

based on the detected action of the passenger, a response image corresponding to the action of the passenger is displayed, and the result of the game based on a prescribed rule is output.

14. An instruction measurement device, wherein,

the instruction measurement device includes:

an operation detection unit that detects an operation of a passenger of the vehicle;

a sight line detection unit that detects a viewpoint position of the passenger;

a coordinate acquisition unit that acquires a three-dimensional point group in an actual space in a direction indicated by the passenger's instruction operation, based on the passenger's instruction operation detected by the operation detection unit and the passenger's viewpoint position detected by the sight line detection unit;

an object information acquiring unit that acquires, from an interface that supplies information indicating an object existing in an actual space indicated by a three-dimensional point group, information indicating the object associated with the three-dimensional point group acquired by the coordinate acquiring unit;

a service provider specifying unit that specifies a service provider associated with the object indicated by the information acquired by the object information acquiring unit, based on service provider information indicating a service provider associated with the object; and

a history information generating unit that generates history information in which the service provider specified by the service provider specifying unit and the object indicated by the information acquired by the object information acquiring unit are associated with each other.

15. The instruction measurement device according to claim 14,

in the history information, the time when the instruction operation is performed and the information indicating the passenger are further associated with each other.

16. The instruction measurement device according to claim 14 or 15,

the instruction measurement device further includes:

an imaging unit that images a surrounding environment and generates a surrounding environment image; and

an attribute determining unit that determines an attribute of the surrounding environment based on the surrounding environment image captured by the imaging unit,

the coordinate acquisition unit acquires a three-dimensional point group in actual space in the direction indicated by the instruction motion of the passenger, based on the attribute of the surrounding environment specified by the attribute determining unit.

17. The instruction measurement device according to any one of claims 14 to 16,

the instruction measurement device further includes a notification unit configured to notify the service provider indicated by the history information that the instruction operation has been performed.

18. The instruction measurement device according to claim 17,

the notification unit notifies the passenger of the history information.

19. The instruction measurement device according to any one of claims 14 to 18,

the instruction measurement device further includes an imaging unit that images the surrounding environment and generates a surrounding environment image,

the coordinate acquisition unit acquires information indicating a three-dimensional point group based on the surrounding environment image captured by the imaging unit from an interface that supplies the three-dimensional point group based on the surrounding environment image.

Technical Field

The invention relates to an in-vehicle performance device, an in-vehicle performance system, an in-vehicle performance method, a program, and an instruction measurement device.

The present application claims priority based on Japanese Patent Application No. 2017-119025, filed on June 16, 2017, the contents of which are incorporated herein by reference.

Background

Conventionally, a technique for scoring the driving skill of a driver of a vehicle is known (for example, patent document 1).

Prior art documents

Patent document

Patent document 1: Japanese Patent Laid-Open Publication No. 2015-90676

Disclosure of Invention

Problems to be solved by the invention

The prior art does not provide entertainment to the occupants of a vehicle. The present invention has been made in view of the above problem, and an object thereof is to provide entertainment to a passenger of a vehicle while the vehicle is moving.

Means for solving the problems

The in-vehicle performance apparatus, the in-vehicle performance system, the in-vehicle performance method, and the program according to the present invention adopt the following configurations.

(1) One aspect of the present invention is an in-vehicle performance device that presents a game played by a passenger in a vehicle, the in-vehicle performance device including: an operation detection unit 30 that detects an operation of the passenger that is not related to driving of the vehicle; a display unit 60 that displays an image visually recognizable by the passenger; and a display control unit 18 that displays a response image corresponding to the movement of the passenger on the display unit based on the movement of the passenger detected by the movement detection unit, and outputs a result of the game based on a predetermined rule.

(2) In the in-vehicle performance device according to the aspect (1), the display control unit may display the response image on the display unit so as to overlap with an object in the surrounding environment of the vehicle.

(3) In the in-vehicle performance device according to the aspect (2), the motion of the passenger may be an action of indicating an object in the surrounding environment, the motion detection unit may detect a direction indicated by the passenger, and the display control unit may cause the display unit to display the response image so as to overlap with a specific object in the indicated direction, based on the direction detected by the motion detection unit.

(4) The in-vehicle performance device according to the aspect (2) or (3) may further include a score calculating unit 17, wherein the score calculating unit 17 calculates, as an effective score, the number of times the motion detecting unit detects the motion indicating the specific object, and the display control unit may display an image indicating the score calculated by the score calculating unit on the display unit at a predetermined timing.

(5) In the in-vehicle performance device according to the aspect (4), the specific object may be a specific road sign on the route to a destination, and the in-vehicle performance device may further include: a destination information acquisition unit 14 that acquires destination information indicating the destination; an imaging unit 40 that images the surrounding environment and generates a surrounding environment image; and an extraction unit 16 that extracts, from the surrounding environment image, the specific road sign on the route to the destination indicated by the destination information, wherein the score calculation unit calculates, as an effective score, the number of times the operation detection unit detects the operation of the passenger indicating the position corresponding to the road sign extracted by the extraction unit.

(6) In the in-vehicle performance device according to the aspect (5), the specific object may be another vehicle traveling in the vicinity of the vehicle, the extraction unit may extract the other vehicle from the surrounding environment image, and the score calculation unit may calculate, as an effective score, the number of times the motion detection unit detects the motion of the passenger indicating the position corresponding to the other vehicle extracted by the extraction unit.

(7) In the in-vehicle performance device according to the aspect (5) or (6), the specific object may be a signboard of a shop existing around the vehicle, the extraction unit may extract the signboard from the surrounding environment image, and the score calculation unit may calculate, as an effective score, the number of times the motion detection unit detects the motion of the passenger indicating the position corresponding to the signboard extracted by the extraction unit.

(8) In the in-vehicle performance device according to any one of (4) to (7), the display control unit may display an image showing the total of the scores calculated by the score calculation unit on the display unit.

(9) An in-vehicle performance system comprising a plurality of in-vehicle performance apparatuses according to any one of (4) to (8), the in-vehicle performance apparatus comprising: a transmission unit 80 that transmits information indicating the score calculated by the score calculation unit to another in-vehicle performance apparatus; and a receiving unit 80 that receives score information indicating the score calculated by the other in-vehicle performance device, wherein the display control unit causes the display unit to display an image indicating the score calculated by the score calculating unit and an image indicating the score of the score information received by the receiving unit.

(10) In the in-vehicle performance system according to the aspect of (9), the display control unit may compare the image showing the score of the score information received from the in-vehicle performance device mounted on a vehicle belonging to the same team as the own device with the image showing the score of the score information received from the in-vehicle performance device mounted on a vehicle belonging to another team and display the images on the display unit.

(11) In the in-vehicle performance system according to the aspect (9) or (10), the display control unit may display an image urging movement based on information indicating the behavior of the vehicle on which the present device is mounted and information indicating the behavior of the vehicle on which the other in-vehicle performance device is mounted.

(12) An in-vehicle performance method for causing a computer, which is provided with a display unit and presents a game played by a passenger in a vehicle, to perform: detecting an action of the passenger unrelated to driving of the vehicle; displaying an image that can be visually confirmed by the passenger; and displaying a response image corresponding to the action of the passenger based on the detected action of the passenger, and outputting a result of the game based on a prescribed rule.

(13) A program for causing a computer, which is provided with a display unit and presents a game played by a passenger in a vehicle, to perform: detecting an action of the passenger unrelated to driving of the vehicle; displaying an image that can be visually confirmed by the passenger; and displaying a response image corresponding to the action of the passenger based on the detected action of the passenger, and outputting a result of the game based on a prescribed rule.

(14) One aspect of the present invention is an instruction measurement device including: an operation detection unit that detects an operation of a passenger of the vehicle; a sight line detection unit that detects a viewpoint position of the passenger; a coordinate acquisition unit that acquires a three-dimensional point group in an actual space in a direction indicated by the passenger's instruction operation, based on the passenger's instruction operation detected by the operation detection unit and the passenger's viewpoint position detected by the sight line detection unit; an object information acquiring unit that acquires, from an interface that supplies information indicating an object existing in an actual space indicated by a three-dimensional point group, information indicating the object associated with the three-dimensional point group acquired by the coordinate acquiring unit; a service provider specifying unit that specifies a service provider associated with the object indicated by the information acquired by the object information acquiring unit, based on service provider information indicating a service provider associated with the object; and a history information generating unit that generates history information in which the service provider specified by the service provider specifying unit and the object indicated by the information acquired by the object information acquiring unit are associated with each other.
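The coordinate acquisition step of aspect (14) — obtaining the real-space points lying in the direction the passenger indicates, from the viewpoint position and the pointing operation — can be pictured as a narrow ray query against a three-dimensional point group. The following is a rough sketch of that idea only; the patent does not prescribe an algorithm, and all names and the angular threshold are hypothetical:

```python
import math

def points_along_indication(viewpoint, fingertip, point_cloud,
                            max_angle_deg: float = 3.0):
    """Return the points of a 3-D point group lying close to the ray from
    the passenger's viewpoint through the fingertip (the indicated
    direction), within a hypothetical angular threshold."""
    # Direction of the indication: from the viewpoint through the fingertip.
    ray = tuple(f - v for f, v in zip(fingertip, viewpoint))
    ray_len = math.sqrt(sum(c * c for c in ray))
    ray = tuple(c / ray_len for c in ray)
    hits = []
    for p in point_cloud:
        rel = tuple(pc - v for pc, v in zip(p, viewpoint))
        rel_len = math.sqrt(sum(c * c for c in rel))
        if rel_len == 0.0:
            continue  # the viewpoint itself cannot be an indicated point
        # Angle between the indication ray and the direction to this point.
        cos_ang = sum(a * b for a, b in zip(ray, rel)) / rel_len
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_ang))))
        if angle <= max_angle_deg:
            hits.append(p)
    return hits
```

The returned subset would then be handed to something like the object information acquiring unit, which resolves it to an object and, further, to a service provider.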

(15) In the instruction measurement device according to the aspect (14), the time at which the instruction operation was performed and information indicating the passenger are further associated with each other in the history information.

(16) The instruction measurement device according to the aspect (14) or (15), further comprising: an imaging unit that images a surrounding environment and generates a surrounding environment image; and an attribute specifying unit that specifies an attribute of the surrounding environment based on the surrounding environment image captured by the imaging unit, wherein the coordinate acquiring unit acquires a three-dimensional point group in an actual space of a direction indicated by the indication motion of the passenger based on the attribute of the surrounding environment specified by the attribute specifying unit.

(17) The instruction measurement device according to any one of the aspects (14) to (16), further comprising a notification unit configured to notify the service provider indicated by the history information that the instruction operation has been performed.

(18) In the instruction measurement device according to (17), the notification unit notifies the passenger of the history information.

(19) The instruction measurement device according to any one of (14) to (18), further comprising an imaging unit that images a surrounding environment and generates a surrounding environment image, wherein the coordinate acquisition unit acquires information indicating a three-dimensional point group based on the surrounding environment image imaged by the imaging unit, from an interface that supplies the three-dimensional point group based on the surrounding environment image.

Effects of the invention

According to the aspects (1) to (19) described above, it is possible to provide entertainment to the vehicle occupant while the vehicle is moving.

Drawings

Fig. 1 is a diagram showing an outline of the in-vehicle performance device according to the first embodiment.

Fig. 2 is a functional configuration diagram showing an example of the configuration of the in-vehicle performance device according to the first embodiment.

Fig. 3 is a diagram showing an example of a response image indicating that a shooting action has been detected.

Fig. 4 is a diagram showing an example of a response image indicating that a shot has hit.

Fig. 5 is a diagram showing an example of a response image indicating the score of the score information.

Fig. 6 is a flowchart showing an example of the operation of the in-vehicle performance device according to the first embodiment.

Fig. 7 is a functional configuration diagram showing an example of the configuration of the in-vehicle performance device according to the second embodiment.

Fig. 8 is a diagram showing a display example of a response image displayed by another in-vehicle performance device according to the second embodiment.

Fig. 9 is a diagram showing an example of the result of a shooting game based on a territory-area rule for the travel route.

Fig. 10 is a diagram showing an example of the win or loss of a shooting game under the territory-area rule.

Fig. 11 is a functional configuration diagram showing an example of the configuration of the in-vehicle performance device according to the third embodiment.

Fig. 12 is a diagram showing an example of the contents of the history information.

Fig. 13 is a diagram showing an example of the contents of the service provider information.

Detailed Description

[ first embodiment ]

Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. Fig. 1 is a diagram showing an outline of the in-vehicle performance device 1 according to the first embodiment. The in-vehicle performance device 1 is a device for providing entertainment to a passenger (hereinafter referred to as passenger PS) of a vehicle (hereinafter referred to as vehicle V) who moves by the vehicle. The in-vehicle performance device 1 presents, for example, a shooting game in which an object existing in the surrounding environment of the vehicle V is the shooting target while the vehicle V is moving. In an example of the present embodiment, the vehicle V travels by automated driving. The in-vehicle performance device 1 is disposed in the vehicle V. The vehicle V is provided with a sound detection unit 20, a motion detection unit 30, an imaging unit 40, an input unit 50, and a display 60.

The sound detection unit 20 detects a sound emitted from the passenger PS. The sound detection unit 20 is, for example, a microphone. The operation detection unit 30 detects the operation of the passenger PS. The motion detector 30 is, for example, a motion sensor. The imaging unit 40 images the environment around the vehicle V. The input unit 50 includes an input device and receives an input operation of the passenger PS. The input device includes a keyboard and other devices for inputting text information, a mouse, a touch panel and other pointing devices, buttons, dials, a joystick, a touch sensor, a touch pad, and the like. The display 60 displays various information based on the control of the in-vehicle performance apparatus 1. In an example of the present embodiment, the display 60 is a transmissive display, and is disposed in contact with a front window of the vehicle V. The passenger PS can visually confirm the surroundings of the vehicle V and the image displayed by the display 60 from the front window. The vehicle V may not include a front window. In this case, the display 60 displays the surrounding image of the vehicle V captured by the imaging unit 40 and the response image in a superimposed manner. The Display 60 may be a Head-Up Display (HUD).

The in-vehicle performance device 1 displays a response image corresponding to the movement of the passenger PS on the display 60 based on the detection results of the sound detection unit 20 and the motion detection unit 30 and the information input to the input unit 50. The in-vehicle performance device 1 presents a response image based on, for example, an action of the passenger PS shooting at an object present in the surrounding environment of the vehicle V (hereinafter referred to as a shooting action), and calculates a score associated with the shooting action. Hereinafter, a specific configuration of the in-vehicle performance device 1 will be described.

[Functional Configuration of In-Vehicle Performance Device]

Fig. 2 is a functional configuration diagram showing an example of the configuration of the in-vehicle performance device 1 according to the first embodiment.

The in-vehicle performance device 1 includes a control unit 10, the sound detection unit 20, the motion detection unit 30, the imaging unit 40, the input unit 50, the display 60, and a storage unit 500. The storage unit 500 stores information indicating a response image (hereinafter referred to as response image information 500-1).

The control unit 10 is realized by a processor such as a CPU (Central Processing Unit) executing a program stored in the storage unit 500, and includes, as its functional units, the sound information acquisition unit 11, the operation information acquisition unit 12, the image information acquisition unit 13, the destination information acquisition unit 14, the game management unit 15, the extraction unit 16, the score calculation unit 17, and the display control unit 18. These functional units may be realized by hardware such as an LSI (Large Scale Integration) circuit, an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array), or may be realized by cooperation between software and hardware.

The sound information acquisition unit 11 acquires sound information indicating the sound of the passenger PS detected by the sound detection unit 20. The operation information acquisition unit 12 acquires operation information indicating the operation of the passenger PS detected by the operation detection unit 30. In an example of the present embodiment, the passenger PS generates a sound imitating a shooting sound, and performs an operation of pointing an object (a shooting target) existing in the surrounding environment of the vehicle V with a finger as a shooting operation. Therefore, a sound imitating a shooting sound is included in the sound information. "Bang" shown in fig. 1 indicates the contents of the sound of the passenger PS imitating the firing sound. The motion information includes an instruction motion for instructing the shooting target. The audio information acquisition unit 11 outputs the acquired audio information to the display control unit 18 and the score calculation unit 17. The operation information acquiring unit 12 outputs the acquired operation information to the display control unit 18 and the score calculating unit 17.
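As a rough, non-authoritative illustration of the behavior described above (not part of the patent text), treating a shot-imitating sound and a pointing gesture that occur close together in time as a single shooting action could be sketched as follows; the class names, fields, and the 0.5-second window are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SoundEvent:
    is_shot_like: bool   # e.g. an utterance recognized as "Bang"
    timestamp: float     # seconds

@dataclass
class PointingEvent:
    direction: Tuple[float, float, float]  # unit vector of the pointing gesture
    timestamp: float

def detect_shooting_action(sound: SoundEvent, pointing: PointingEvent,
                           max_gap: float = 0.5) -> Optional[Tuple[float, float, float]]:
    """Treat a shot-like sound and a pointing gesture occurring within
    max_gap seconds of each other as one shooting action; return the
    indicated direction on a match, otherwise None."""
    if sound.is_shot_like and abs(sound.timestamp - pointing.timestamp) <= max_gap:
        return pointing.direction
    return None
```

Under this sketch, the sound information acquisition unit would feed sound events and the operation information acquisition unit would feed pointing events into such a function before any scoring takes place.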

The imaging unit 40 images the surrounding environment in which the vehicle V travels, and the image information acquisition unit 13 acquires information (hereinafter, referred to as image information) indicating the generated surrounding environment image. The image information acquiring unit 13 outputs the acquired image information to the extracting unit 16. The destination information acquiring unit 14 acquires information indicating a destination indicated by an operation input to the input unit 50 (hereinafter referred to as destination information). The vehicle V travels by autonomous driving toward the destination indicated by the destination information acquired by the destination information acquiring unit 14, for example. The destination information acquisition unit 14 outputs the destination information to the display control unit 18.

The game management unit 15 manages a shooting game performed by the in-vehicle performance apparatus 1. The game management unit 15 manages, for example, the start and end of a shooting game. For example, when the operation input to the input unit 50 indicates the start of the shooting game, the game management unit 15 executes a process of starting the shooting game. When the operation input to the input unit 50 indicates the end of the shooting game, the game management unit 15 executes a process of ending the shooting game. The process of starting the shooting game is a process (hereinafter referred to as an enable process) of allowing the extraction unit 16, the score calculation unit 17, and the display control unit 18 to execute various processes. The process of ending the shooting game is a process (hereinafter, referred to as a disabling process) in which execution of various processes is not permitted for the extraction unit 16, the score calculation unit 17, and the display control unit 18. The extraction unit 16, the score calculation unit 17, and the display control unit 18 execute various processes when the game management unit 15 executes the enable process, and do not execute various processes when the game management unit 15 executes the disable process.

In the above description, the game management unit 15 has been described as acquiring information indicating the start and end of the shooting game based on the operation input to the input unit 50, but the present invention is not limited to this. The start and end of the shooting game may also be instructed by the voice of the passenger PS. In this case, the game management unit 15 may manage the start and end of the shooting game based on the detection result of the sound detection unit 20. The game management unit 15 may also be configured to determine the degree of excitement of the in-vehicle environment of the vehicle V based on the detection results of the sound detection unit 20 and the motion detection unit 30. The game management unit 15 may automatically execute the enabling process when the degree of excitement of the in-vehicle environment of the vehicle V is low. In this case, the in-vehicle performance device 1 can present the shooting game when the in-vehicle atmosphere is not lively, and thereby liven it up. Further, the game management unit 15 may automatically execute the disabling process when the degree of excitement of the in-vehicle environment of the vehicle V remains low. In this case, the in-vehicle performance device 1 can end the shooting game when the presented shooting game has not livened up the in-vehicle atmosphere.
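The enable and disable processes described above amount to a single gate placed in front of the extraction, score-calculation, and display-control steps. A minimal sketch of that gating, with hypothetical names (the patent does not prescribe an implementation):

```python
class GameManager:
    """Gates downstream processing behind an enabled flag, mirroring the
    enable process (start of the game) and the disable process (end)."""

    def __init__(self) -> None:
        self.enabled = False

    def start_game(self) -> None:   # enable process
        self.enabled = True

    def end_game(self) -> None:     # disable process
        self.enabled = False

    def run_step(self, step):
        """Run a processing step (e.g. extraction or scoring) only while
        the game is enabled; otherwise do nothing and return None."""
        return step() if self.enabled else None
```

Each of the extraction, score-calculation, and display-control routines would be passed through `run_step`, so that none of them executes between the disable process and the next enable process.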

The extraction unit 16 receives input of image information from the image information acquisition unit 13. The extraction unit 16 performs image recognition processing on the surrounding environment image indicated by the image information, and extracts objects to be shot at by the passenger PS. The extraction unit 16 extracts, as the objects to be shot at by the passenger PS, for example, a preceding vehicle, a following vehicle, a parallel traveling vehicle, or an opposing vehicle traveling near the vehicle V, a signboard of a store existing around the vehicle V, or a road sign. The extraction unit 16 matches the position of the actual object present around the vehicle V with the position of the extracted object (shooting target) based on the surrounding environment image, and outputs information indicating the position of the shooting target to the score calculation unit 17.

Specifically, the extraction unit 16 performs image recognition processing on the surrounding image, and recognizes the positions of various objects captured in the surrounding image on the surrounding image. The extraction unit 16 matches the position on the front window when the surrounding image is presented in full size on the front window with the position of the extracted object (shooting target) on the surrounding image. The extraction unit 16 outputs the position of the front window to the score calculation unit 17 and the display control unit 18 as virtual position information indicating a virtual position of the shooting target.
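The coordinate matching performed by the extraction unit 16 amounts to a simple rescaling when the surrounding image is presented full-size on the front window. The following sketch illustrates that mapping; the function name and tuple-based interface are assumptions for illustration.

```python
def image_to_window(pos, image_size, window_size):
    """Map a pixel position on the surrounding-environment image to the
    corresponding position on the front window, assuming the image is
    presented at full size (stretched) on the window, as described for
    the extraction unit 16."""
    (x, y), (iw, ih), (ww, wh) = pos, image_size, window_size
    # Scale each axis independently from image pixels to window coordinates.
    return (x * ww / iw, y * wh / ih)
```

The returned window position corresponds to the virtual position information that the extraction unit 16 outputs to the score calculation unit 17 and the display control unit 18.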

The score calculation unit 17 calculates a score for the movement of the passenger PS based on the voice information input from the voice information acquisition unit 11, the movement information acquired by the movement information acquisition unit 12, and the virtual position information input from the extraction unit 16. For example, when the sound information indicates a sound imitating a shooting sound and the direction of the instruction action indicated by the action information is the direction of the position of the shooting target indicated by the virtual position information, the score calculating unit 17 calculates a high score for the action of the passenger PS. Further, when the sound information indicates a sound imitating a shooting sound and the direction of the instruction action indicated by the action information is not the direction of the position of the shooting target indicated by the virtual position information, the score calculating unit 17 calculates a low score for the action of the passenger PS. The score calculating unit 17 outputs score information indicating the calculated score to the display control unit 18.
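The high/low scoring rule above can be sketched as follows. The angular tolerance, the score values (100 and 10), and the vector representation of directions are illustrative assumptions; the specification only states that a matching direction yields a high score and a non-matching direction a low score.

```python
import math

def aim_error_deg(pointing_dir, target_dir):
    """Angle in degrees between the passenger's pointing direction and the
    direction of the shooting target, both given as 2-D vectors."""
    dot = pointing_dir[0] * target_dir[0] + pointing_dir[1] * target_dir[1]
    na = math.hypot(*pointing_dir)
    nb = math.hypot(*target_dir)
    # Clamp to avoid domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def score_action(said_bang, pointing_dir, target_dir, tol_deg=10.0):
    """Hypothetical scoring rule for the score calculation unit 17: a high
    score when the passenger imitates a shooting sound AND points at the
    target; a low score when the sound is made but the aim is off; no
    score otherwise."""
    if not said_bang:
        return 0
    return 100 if aim_error_deg(pointing_dir, target_dir) <= tol_deg else 10
```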

The display control unit 18 displays the response image on the display 60 based on the sound information input from the sound information acquisition unit 11, the motion information input from the motion information acquisition unit 12, the destination information input from the destination information acquisition unit 14, the virtual position information input from the extraction unit 16, and the score information input from the score calculation unit 17. A specific example of the response image displayed on the display 60 by the display control unit 18 will be described below.

[ example of response image indicating that the shooting action is detected ]

Fig. 3 is a diagram showing an example of the response image GP1A indicating that the shooting motion is detected. When the sound information indicates a sound imitating a shooting sound and the operation information indicates an instruction operation, the display control unit 18 displays a response image (the response image GP1A shown in the figure) indicating that the shooting operation is detected on the display 60. As shown in fig. 3, the response image GP1A is, for example, an image indicating that the shooting object was hit (hit). The display control unit 18 presents the response image GP1A to indicate the detection of the shooting action to the passenger PS.

[ example of response image indicating that a shot was fired ]

Fig. 4 is a diagram showing an example of the response image GP1B indicating that a shot was fired. The display control unit 18 displays on the display 60 the response images GP1B (the response images GP1B-1 to GP1B-4 shown in the figure) indicating that a shot was fired. As shown in fig. 4, the response image GP1B is, for example, an image in which ink is applied to a predetermined area. The display control unit 18 presents the response image GP1B in the direction indicated by the operation information to show the passenger PS that the shooting action has been performed. In addition, the display control unit 18 may display the response image GP1B so as to overlap the shooting target. Specifically, the display control unit 18 may display the response image so as to overlap an opposing vehicle (the illustrated response image GP1B-1), a road sign (the illustrated response image GP1B-2), or a signboard of a store present in the periphery of the vehicle V (the illustrated response image GP1B-3). In addition, when there is no shooting target in the direction indicated by the instruction action, the display control unit 18 may display a response image on the road surface in that direction (the illustrated response image GP1B-4).

The display control unit 18 may be configured to enlarge or reduce the predetermined area indicated by the response image GP1B as the shooting target approaches or recedes in accordance with the movement of the vehicle V.
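One simple way to realize this enlargement and reduction is to scale the on-screen size of the ink splat inversely with the distance to the shooting target. The function below is an illustrative sketch; the pinhole-style scaling model and parameter names are assumptions, not taken from the specification.

```python
def scaled_radius(base_radius, base_distance, distance):
    """Scale the on-screen radius of the ink splat (response image GP1B)
    inversely with the distance to the shooting target, so the splat grows
    as the vehicle V approaches and shrinks as it recedes."""
    # Guard against division by zero when the target is extremely close.
    return base_radius * base_distance / max(distance, 1e-6)
```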

[ example of response image showing score ]

Fig. 5 is a diagram showing an example of the response image GP1C indicating the score. The display control unit 18 displays on the display 60 a response image (the illustrated response image GP1C) indicating the score indicated by the score information, that is, the score calculated based on the movement of the passenger PS. As shown in fig. 5, the response image GP1C is, for example, an image in which the score is expressed by a numerical value. By presenting the response image GP1C, the display control unit 18 shows the passenger PS an evaluation of the shooting action. The response image GP1C may also be an image in which the score is expressed by a rank.

Specifically, the response image GP1C may present an image indicating the rank of the passenger PS (for example, "beginner", "maturer", "rower", "celebrity", "senior", or the like) according to the score.

[ actions of in-vehicle performance apparatus ]

Fig. 6 is a flowchart showing an example of the operation of the in-vehicle rendering device 1 according to the first embodiment. The sound information acquiring unit 11 acquires sound information from the sound detecting unit 20 at all times or at predetermined time intervals (step S110). The operation information acquiring unit 12 acquires the operation information from the operation detecting unit 30 at all times or at predetermined time intervals (step S120). The image information acquiring unit 13 acquires image information from the imaging unit 40 at all times or at predetermined time intervals (step S130). The destination information acquiring unit 14 acquires destination information indicated by an operation input to the input unit 50 (step S140). The passenger PS inputs the destination information by the input unit 50, for example, when starting the traveling of the vehicle V.

The game management unit 15 manages the start and end of the game (step S145). When the operation input to the input unit 50 indicates the start of the shooting game, the game management unit 15 advances the process to step S150. The game management unit 15 ends the processing when the operation input to the input unit 50 does not indicate the start of the shooting game or indicates the end of the shooting game that has already been started.

The extraction unit 16 performs image processing on the surrounding image indicated by the image information acquired by the image information acquisition unit 13 to extract the shooting target, and generates virtual position information (step S150). The extraction unit 16 extracts the shooting target and generates virtual position information, for example, each time the image information acquisition unit 13 acquires the image information. The score calculating unit 17 determines whether the passenger PS has performed the shooting operation based on the sound information acquired by the sound information acquiring unit 11 and the operation information acquired by the operation information acquiring unit 12 (step S160). When determining that the passenger PS has performed the shooting operation, the score calculation unit 17 calculates a score for the operation of the passenger PS based on the operation information, the image information acquired by the image information acquisition unit 13, and the virtual position information generated by the extraction unit 16 (step S170). When determining that the passenger PS has not performed the shooting operation, the score calculation unit 17 does not calculate the score for the operation of the passenger PS, and advances the process to step S180. The display control unit 18 displays the response image on the display 60 based on the sound information, the motion information, the destination information, and the virtual position information (step S180).
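The per-frame flow of Fig. 6 can be sketched as one function. All helper functions here are hypothetical stand-ins for the extraction unit 16, score calculation unit 17, and display control unit 18; the dictionary-based inputs are assumptions for illustration only.

```python
def extract_targets(image_info):
    """Stand-in for the extraction unit 16 (step S150)."""
    return image_info.get("targets", [])

def is_shooting_action(sound_info, motion_info):
    """Stand-in for the shooting-action decision (step S160)."""
    return bool(sound_info.get("bang")) and motion_info.get("pointing") is not None

def calc_score(motion_info, targets):
    """Stand-in for the score calculation unit 17 (step S170)."""
    return 100 if motion_info.get("pointing") in targets else 10

def run_frame(game_enabled, sound_info, motion_info, image_info):
    """One pass of the Fig. 6 flow: gate on the game state (step S145),
    extract targets, score any shooting action, and return what the
    display control unit would render (step S180)."""
    if not game_enabled:                                  # step S145
        return []
    targets = extract_targets(image_info)                 # step S150
    responses = []
    if is_shooting_action(sound_info, motion_info):       # step S160
        responses.append(("score", calc_score(motion_info, targets)))  # S170
    responses.append(("frame", targets))                  # step S180
    return responses
```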

In the above description, the case where the shooting operation is an operation of generating a sound imitating a shooting sound and instructing a shooting target existing in the surrounding environment of the vehicle V with a finger has been described, but the shooting operation is not limited to this. The shooting operation may be, for example, either an operation of generating a sound imitating a shooting sound or an operation of instructing a shooting target present in the surrounding environment of the vehicle V. In this case, the in-vehicle rendering device 1 may not include a detection unit (the sound detection unit 20 or the motion detection unit 30) for detecting information not used for the shooting motion.

[ summary of the first embodiment ]

As described above, the in-vehicle rendering device 1 according to the present embodiment includes the motion detection unit 30 that detects the motion of the passenger PS of the vehicle V, and the display control unit 18 that controls the display of the response image (in this example, the response image GP1A) according to the motion of the passenger PS based on the motion of the passenger PS detected by the motion detection unit 30. Therefore, the in-vehicle entertainment apparatus 1 of the present embodiment can provide entertainment to the passenger PS moving in the vehicle V.

The display control unit 18 of the in-vehicle rendering device 1 according to the present embodiment displays the response images (response images GP1B-1 to GP1B-3 in this example) superimposed on objects (shooting targets) in the surroundings of the vehicle V. In the in-vehicle rendering device 1 according to the present embodiment, the motion of the passenger PS is an instruction motion pointing at an object in the surrounding environment (the shooting target), the motion detection unit 30 detects the direction indicated by the passenger PS (in this example, as motion information), and the display control unit 18 displays the response images (in this example, the response images GP1B-1 to GP1B-4) based on the motion information detected by the motion detection unit 30. The in-vehicle rendering device 1 according to the present embodiment further includes a score calculating unit 17 that calculates a score based on the motion information and the virtual position information, and the display control unit 18 further displays a response image (in this example, the response image GP1C) indicating the score calculated by the score calculating unit 17. Thus, the in-vehicle performance apparatus 1 according to the present embodiment can display the response image on the display 60 through a more entertaining performance, and improve the enthusiasm of the passenger PS for the shooting action.

[ rules and points concerning shooting game ]

In the above description, the configuration has been described in which the score calculating unit 17 calculates the score based on the direction of the instruction movement indicated by the movement information and the position of the shooting target, but the present invention is not limited to this. The score calculating unit 17 may be configured to calculate the score in accordance with the rules of the shooting game performed by the in-vehicle performance apparatus 1. The rules of the shooting game are, for example, a rule in which a road sign in the destination direction indicated by the destination information is set as the target of shooting, a rule in which a vehicle of a specific vehicle type is set as the target of shooting, a rule in which a signboard of a store is set as the target of shooting, and the like. Under such a rule, a high score is calculated for a shooting action on a shooting target that complies with the rule, among the shooting targets extracted by the extraction unit 16.

The score calculating unit 17 may be configured to calculate the score based on the area of the displayed response image GP1B (an image in which ink is applied to a predetermined area). In the following description, a rule for calculating a score based on the area of the displayed response image GP1B is referred to as the territorial area rule. Under the territorial area rule, the score calculating unit 17 calculates a score based on the area of the ink indicated by the response image GP1B. Thus, the in-vehicle performance apparatus 1 according to the present embodiment can display the response image on the display 60 through a more entertaining performance, and can improve the enthusiasm of the passenger PS for the shooting action.

In the above description, the case where the extraction unit 16, the score calculation unit 17, and the display control unit 18 do not execute various processes when the game management unit 15 performs the disabling process has been described, but the present invention is not limited to this. For example, when the game management unit 15 performs the disabling process, only the display control unit 18 may be configured not to execute various processes. In this case, the various functional units execute processing, but various images are not displayed on the display 60.

[ second embodiment ]

Hereinafter, a second embodiment of the present invention will be described with reference to the drawings. In the second embodiment, the in-vehicle rendering device 2 that displays an image according to the position of the vehicle V will be described. Specifically, the in-vehicle rendering device 2 displays, on its own display 60, images that in-vehicle rendering devices 2 mounted on other vehicles V have displayed. The same components as those in the above-described embodiment are denoted by the same reference numerals, and description thereof is omitted.

Fig. 7 is a functional configuration diagram showing an example of the configuration of the in-vehicle rendering device 2 according to the second embodiment. The in-vehicle entertainment apparatus 2 includes a control unit 10A, a sound detection unit 20, a motion detection unit 30, an imaging unit 40, an input unit 50, a display 60, a position detection unit 70, a communication unit 80, and a storage unit 500.

The position detection unit 70 detects the position where the vehicle V travels. The position detection unit 70 detects the position of the vehicle V by a method using a Global Navigation Satellite System (GNSS) such as GPS (Global Positioning System) or a Regional Navigation Satellite System (RNSS) such as the Quasi-Zenith Satellite System (QZSS). The position detection unit 70 outputs position information indicating the position of the vehicle V to the control unit 10A.

The communication unit 80 communicates by wireless communication with a server (not shown) that collects information used for the processing of the in-vehicle rendering device 2. Examples of the wireless communication include short-range wireless communication by Wi-Fi (registered trademark) and wireless communication via a mobile communication network such as LTE (Long Term Evolution). It should be noted that direct communication may be performed between the vehicles V. In this case, the vehicles V communicate using an ad hoc network such as DSRC (Dedicated Short Range Communications).

The control unit 10A executes the program stored in the storage unit 500, and realizes the sound information acquisition unit 11, the operation information acquisition unit 12, the image information acquisition unit 13, the destination information acquisition unit 14, the game management unit 15, the extraction unit 16, the score calculation unit 17, the display control unit 18, and the position information acquisition unit 19 as functional units thereof.

The position information acquiring unit 19 acquires position information from the position detecting unit 70. The positional information acquisition unit 19 acquires positional information at all times or at predetermined time intervals. The positional information acquisition unit 19 outputs the acquired positional information to the display control unit 18 and the communication unit 80. The display control unit 18 of the present embodiment outputs the response image displayed on the display 60 to the communication unit 80 at all times or at predetermined time intervals.

The communication unit 80 associates the positional information input from the positional information acquisition unit 19 with the response image input from the display control unit 18, and transmits the result to the server. In the following description, information in which the positional information and the response image are associated with each other is referred to as positional image information. The communication unit 80 receives the position image information transmitted from the other in-vehicle rendering device 2 to the server. Specifically, the communication unit 80 receives the position image information associated with the position information based on the position information input from the position information acquisition unit 19. In other words, the communication unit 80 receives, from the server, the position image information associated with the position information indicating the current position of the vehicle V. The communication unit 80 outputs the received position image information to the display control unit 18.
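The server-side handling of position image information can be sketched as a store keyed by a coarse position cell, so that "position information that matches" tolerates small GNSS differences. The class name, the grid-cell quantization, and the cell size are illustrative assumptions; the specification does not define how positions are compared.

```python
from collections import defaultdict

class PositionImageStore:
    """Minimal sketch of a server-side store for position image
    information: response images keyed by an approximate position."""

    def __init__(self, cell_deg=0.001):
        self.cell_deg = cell_deg          # ~100 m cells at mid-latitudes
        self._store = defaultdict(list)

    def _key(self, lat, lon):
        # Quantize coordinates so nearby positions share one cell.
        return (round(lat / self.cell_deg), round(lon / self.cell_deg))

    def put(self, lat, lon, image_id):
        """Record a response image transmitted with its position."""
        self._store[self._key(lat, lon)].append(image_id)

    def get(self, lat, lon):
        """Response images other devices displayed near this position."""
        return list(self._store[self._key(lat, lon)])
```

A device would call `put` with each displayed response image and `get` with its current position to retrieve what other in-vehicle rendering devices 2 displayed there in the past.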

The display control unit 18 of the present embodiment receives input of position image information from the communication unit 80. The display control unit 18 displays a response image of the position image information associated with the position information that matches the position information input from the position information acquisition unit 19 on the display 60. That is, the display control unit 18 causes the display 60 to display a response image that the other in-vehicle rendering device 2 has displayed at the current position of the vehicle V in the past.

[ example of display of response image displayed by other in-vehicle rendering device ]

Fig. 8 is a diagram illustrating an example of display of response images displayed by other in-vehicle performance apparatuses 2 according to the second embodiment. The display control unit 18 displays the response images based on the processing of the present apparatus (the illustrated response images GP1B-1 to GP1B-3) together with response images that in-vehicle rendering devices 2 in other vehicles displayed at the current position in the past (the illustrated response images GP2-1 to GP2-3 and GP3-1 to GP3-2). The illustrated response images GP2-1 to GP2-3 are response images that the in-vehicle rendering device 2 in one other vehicle displayed at the current position of the vehicle V in the past. The illustrated response images GP3-1 to GP3-2 are response images that the in-vehicle rendering device 2 in a different vehicle displayed at the current position of the vehicle V in the past.

[ summary of the second embodiment ]

As described above, the position detection unit 70 of the in-vehicle rendering device 2 according to the present embodiment detects the position of the vehicle V, the communication unit 80 transmits and receives position image information, and the display control unit 18 displays the response images processed by the own device (the illustrated response images GP1B-1 to GP1B-3) and the response images displayed by the other in-vehicle rendering devices 2 at the current position of the vehicle V in the past (the illustrated response images GP2-1 to GP2-3 and GP3-1 to GP3-2). Thus, the in-vehicle rendering device 2 according to the present embodiment can display on the display 60 the response images of shooting games performed by persons other than the passenger PS, thereby stimulating the competitiveness of the passenger PS and improving the enthusiasm for the shooting action.

[ matches under the territorial area rule ]

The shooting game performed by the in-vehicle rendering device 2 may be configured as a match between the passenger PS and other persons under the territorial area rule, based on the position image information. Specifically, the score calculating unit 17 calculates scores based on the areas of the other persons' response images indicated by the position image information (the response images GP2-1 to GP2-3 and the response images GP3-1 to GP3-2 shown in fig. 8). The score calculating unit 17 also calculates a score based on the area of the response images displayed by the processing of the present apparatus (the response images GP1B-1 to GP1B-3 shown in fig. 8).

The in-vehicle rendering device 2 determines, as the winner, the passenger PS of the in-vehicle rendering device 2 (vehicle V) that displayed the response images with the higher score.
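The winner decision under the territorial area rule reduces to comparing total ink areas. The sketch below assumes circular ink splats for simplicity; the function names and the circular model are illustrative assumptions.

```python
import math

def ink_area(splat_radii):
    """Total ink area of one player's response images GP1B, modeling each
    splat as a circle of the given radius (an illustrative assumption)."""
    return sum(math.pi * r * r for r in splat_radii)

def territorial_winner(areas):
    """Winner under the territorial area rule: the player whose displayed
    response images cover the largest total area.  `areas` maps a player
    identifier to that player's total ink area."""
    return max(areas, key=areas.get)
```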

[ matches under the territorial area rule: team battle ]

Further, the shooting game under the territorial area rule may be played as a team battle. For example, the shooting game under the territorial area rule may be played by a plurality of teams (a red team, a blue team, a yellow team, and the like). In this case, the score calculating unit 17 calculates, as the score of the own apparatus's team, the sum of the area of the response images in the position image information whose ink color matches the ink color of the own apparatus's response images and the area of the response images displayed by the processing of the own apparatus. The in-vehicle rendering device 2 then determines the team with the higher score as the winner.
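The team-battle scoring above is a sum of ink areas grouped by team color. A minimal sketch, assuming each splat is reported as a (color, area) pair gathered from the position image information and the device's own displayed response images:

```python
def team_scores(splats):
    """Sum ink areas per team color.  `splats` is an iterable of
    (color, area) pairs."""
    scores = {}
    for color, area in splats:
        scores[color] = scores.get(color, 0.0) + area
    return scores

def winning_team(splats):
    """The team whose combined ink area is largest wins the team battle."""
    scores = team_scores(splats)
    return max(scores, key=scores.get)
```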

[ territorial area rule based on travel route ]

In the above description, the case where the shooting game under the territorial area rule is played based on the area of the response images corresponding to the shooting actions of the passenger PS has been described, but the present invention is not limited to this. The in-vehicle rendering device 2 may play the shooting game under the territorial area rule based on the area of the route on which the vehicle V travels, using the position information acquired by the position information acquiring unit 19. In this case, the display control unit 18 calculates an area by multiplying the length of the route on which the vehicle V equipped with the own device travels by a predetermined value indicating the route width, and calculates a score according to the area. The communication unit 80 receives travel route information indicating routes on which the vehicles equipped with the other in-vehicle rendering devices 2 have traveled. The score calculation unit 17 calculates an area by multiplying the length of the route traveled by each other vehicle indicated by the received travel route information by the predetermined value indicating the route width, and calculates a score according to the area. The in-vehicle rendering device 2 determines the passenger PS of the vehicle V having the higher score as the winner.
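The route-length-times-width computation can be sketched directly. The polyline representation of the route is an illustrative assumption; in practice the waypoints would come from the position information acquiring unit 19 or the received travel route information.

```python
import math

def route_area(waypoints, width):
    """Approximate the 'claimed' area of a travel route as its polyline
    length multiplied by a fixed route width, as described for the
    territorial area rule based on the travel route."""
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    return length * width
```

Comparing `route_area` for the own vehicle against the values computed from the other vehicles' travel route information then yields the winner.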

[ display based on the rule of territory area of travel route ]

The display control unit 18 may be configured to display, on the display 60, an image showing the area of the route on which the vehicle V travels and images showing the areas of the routes indicated by the received travel route information. Fig. 9 is a diagram showing an example of the result of the shooting game under the territorial area rule based on the travel route. In this case, the display control unit 18 generates an image (the illustrated image P1) indicating the route on which the vehicle V travels, based on the position information acquired by the position information acquisition unit 19. The communication unit 80 receives travel route information indicating the routes on which the vehicles V equipped with the other in-vehicle rendering devices 2 have traveled. The display control unit 18 generates images (the illustrated images P2 and P3) indicating the travel route information received by the communication unit 80. The display control unit 18 displays the generated images P1 to P3 on the display 60 superimposed on an image showing a map of a predetermined range (the illustrated image P5). Thus, the in-vehicle performance apparatus 2 according to the present embodiment can display on the display 60 the routes along which persons other than the passenger PS have moved, thereby stimulating the competitiveness of the passenger PS and improving the enthusiasm for moving the vehicle V.

When the passenger PS moves the vehicle V within a predetermined range for a long time in order to win under the territorial area rule, traffic congestion may occur within that range. In this case, the automatic driving function of the vehicle V may control the travel of the vehicle V so as to move to the destination indicated by the destination information, or so as to avoid staying in the range for a long time.

[ image showing the win or loss of the shooting game under the territorial area rule ]

In addition, when the shooting game under the territorial area rule based on the travel route is played as a team battle, the display control unit 18 may display an image showing the win or loss of the game on the display 60. Fig. 10 is a diagram showing an example of the win or loss of the shooting game under the territorial area rule. The display control unit 18 calculates the area of the own team and the areas of the other teams based on the position information and the travel route information. The display control unit 18 then generates an image (the illustrated image P6) indicating the ratio of the area of the own team to the areas of the other teams within the range in which the shooting game under the territorial area rule based on the travel route is played (in the illustrated example, all of Japan), and displays the image on the display 60. The display control unit 18 may also be configured to generate and display the image P6 for a predetermined part of that range (for example, the Kanto region). Thus, the in-vehicle performance apparatus 2 according to the present embodiment can stimulate the competitiveness of the passenger PS and improve the enthusiasm for moving the vehicle V.
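The ratio shown by the image P6 is simply each team's share of the total claimed area. A minimal sketch, with the function name and list-based interface as illustrative assumptions:

```python
def area_ratio(own_area, other_areas):
    """Fraction of the contested region claimed by the own team, used to
    render the win/loss ratio image (image P6)."""
    total = own_area + sum(other_areas)
    # An empty contest (no claimed area at all) yields a zero share.
    return own_area / total if total else 0.0
```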

[ Association of shooting Game with reality ]

The in-vehicle rendering device 2 may be configured to provide services, contents, and the like to the passenger PS based on the score calculated by the score calculating unit 17 and the result under the territorial area rule. For example, when the passenger PS performs a shooting action on a signboard of a store located around the vehicle V, the in-vehicle performance apparatus 2 transmits information indicating that the shooting action has been performed to the server apparatus of the store. The server apparatus of the store provides the passenger PS with a coupon that can be used when purchasing products of the store. In addition, when the vehicle V is a vehicle used by a car sharing service, the in-vehicle performance apparatus 2 transmits the score calculated by the score calculating unit 17 to the server apparatus of the operator who provides the service. The server apparatus of the operator provides a coupon usable in the next car sharing service to a passenger PS who has obtained a high score. In addition, various coupons may be provided to the passenger PS who wins under the territorial area rule or to passengers PS belonging to the winning team.

The in-vehicle rendering device 2 may perform a process of urging movement to a certain shooting target based on information indicating the behavior of each vehicle V (hereinafter, referred to as behavior information). In this case, the action information is, for example, position image information, position information, travel route information, and the like acquired by each in-vehicle performance apparatus 2.

The in-vehicle performance apparatus 2 acquires the action information collected from the other in-vehicle performance apparatuses 2 from a server that stores the action information (hereinafter referred to as the action information server). Based on the acquired action information, the in-vehicle rendering device 2 may be configured, for example, to highlight a shooting target on which many other persons have performed shooting actions, a shooting target for which a store coupon is provided, or the like. Specifically, the in-vehicle rendering device 2 may be configured to display an emphasizing image on the display 60 so as to overlap the position of the shooting target. The in-vehicle rendering device 2 may also be configured to display on the display 60 an image of the shooting target captured in the past by an imaging unit mounted on another person's vehicle V, or an image indicating the direction of movement toward the position of the shooting target.

Here, the passenger PS is likely to move the vehicle V in the direction of an emphasized shooting target, a shooting target on which many other persons have performed shooting actions, or a shooting target for which a store coupon is provided. Therefore, the in-vehicle rendering device 2 can urge the in-vehicle rendering device 2 (the vehicle V) to move toward a certain shooting target based on the action information. The in-vehicle performance apparatus 2 detects a congestion state based on, for example, the positions of the other in-vehicle performance apparatuses 2 (vehicles V) indicated by the action information. The in-vehicle rendering apparatus 2 can then move the vehicle V so as to avoid congestion by emphasizing a shooting target located in a position (direction) different from the crowded position, so that the vehicle does not move toward the crowded position.

In addition, instead of the configuration in which the in-vehicle rendering device 2 performs the process of prompting movement toward a certain shooting target based on the action information, the action information server may perform that process. In this case, the action information server transmits, to the in-vehicle performance apparatus 2, an image and information prompting the movement of the in-vehicle performance apparatus 2 (vehicle V), based on the action information acquired from each in-vehicle performance apparatus 2. The in-vehicle performance apparatus 2 displays, for example, the image acquired from the action information server on the display 60.

Further, each shooting target may be given an index indicating the frequency of shooting actions performed on it, derived from the action information acquired from each in-vehicle performance apparatus 2. The in-vehicle performance apparatus 2 or the action information server calculates this index, for example, from the action information acquired from each in-vehicle performance apparatus 2. The in-vehicle performance apparatus 2 may be configured to display the response image and the shooting target with emphasis based on the index.

For example, for a shooting target given a high index, the in-vehicle performance apparatus 2 determines that many in-vehicle performance apparatuses 2 (vehicles V) are present in its surroundings, that is, that the area is congested. In this case, the in-vehicle performance apparatus 2 may perform filtering processing that de-emphasizes the shooting target or excludes it from the shooting targets. In addition, when the shooting target given a high index is one associated with a store coupon, the in-vehicle performance apparatus 2 (or the action information server) may perform processing that attaches the coupon function to a store given a low-index shooting target (signboard) among the chain of stores to which the shooting-target store belongs. Conversely, for a shooting target given a low index, the in-vehicle performance apparatus 2 determines that few in-vehicle performance apparatuses 2 (vehicles V) are present in its surroundings, that is, that the area is not congested. In this case, the in-vehicle performance apparatus 2 may perform processing that emphasizes the shooting target.
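The index calculation and the emphasize/filter decision described above can be sketched as follows. The thresholds and function names are illustrative only; the patent leaves the concrete values and data format of the action information unspecified.

```python
def classify_targets(action_logs, high=50, low=5):
    """Count shooting actions per target from the action information and
    decide how each target should be treated on the display: targets
    with a high index are filtered (congested), targets with a low
    index are emphasized (uncrowded)."""
    counts = {}
    for target in action_logs:  # each log entry names the target shot at
        counts[target] = counts.get(target, 0) + 1
    decisions = {}
    for target, index in counts.items():
        if index >= high:
            decisions[target] = "filter"      # crowded: de-emphasize or exclude
        elif index <= low:
            decisions[target] = "emphasize"   # uncrowded: highlight
        else:
            decisions[target] = "normal"
    return decisions
```

This classification could run on either the apparatus or the action information server, since both hold the aggregated action information.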

Through these processes, the in-vehicle performance apparatus 2 can, based on the action information (index), prompt the in-vehicle performance apparatus 2 (vehicle V) to move toward a particular shooting target.

[ output methods other than images ]

In the above description, the display control unit 18 outputs the result of the shooting game by displaying various images, but the present invention is not limited to this; the display control unit 18 may also output the result of the shooting game by sound, for example. Specifically, the vehicle V may be provided with a speaker, and the display control unit 18 may display the response image while outputting a firing sound from the speaker. The display control unit 18 may also be configured to display the result and the success or failure of the shooting game indicated by the images P5 and P6 on the display 60, and to output a sound announcing that result and success or failure from the speaker.

[ method of displaying an image other than a display ]

In the above description, various images are displayed on the display 60, but the present invention is not limited to this. For example, in the case of a convertible vehicle or a vehicle without a front window, the passenger PS may wear a head-mounted device inside the vehicle. In this case, the head-mounted device displays the surrounding image of the vehicle captured by the imaging unit 40 and the response image superimposed on each other. The vehicle V may also be configured to include the display 60 in a side window in addition to the front window. In this case, the extraction unit 16 performs image recognition processing on the surrounding image and recognizes the positions, on the surrounding image, of the various objects captured in it. The extraction unit 16 matches the position on the side window when the surrounding image is presented at full size on the side window with the position of the extracted object (shooting target) on the surrounding image. The extraction unit 16 outputs the position on the side window to the score calculation unit 17 and the display control unit 18 as virtual position information indicating a virtual position of the shooting target. When the side window is smaller than the front window, the target of a shooting action performed through the side window is preferably a large-sized or large-area shooting target, that is, one at which the shooting action can easily be performed within the small area (side window).

[ method of displaying image based on vehicle information ]

The in-vehicle performance apparatus 1 and the in-vehicle performance apparatus 2 (hereinafter simply referred to as the "in-vehicle performance apparatus") may be configured to detect the state of the vehicle V. In this case, the in-vehicle performance apparatus includes a vehicle state acquisition unit. The vehicle state acquisition unit acquires motion parameters such as the speed, steering angle, pitch angle, and roll angle of the vehicle V. The display control unit 18 displays various images based on, for example, the motion parameters acquired by the vehicle state acquisition unit. Specifically, the display control unit 18 performs refresh processing of the image displayed on the display 60 based on the speed of the vehicle V. More specifically, based on the speed of the vehicle V, the display control unit 18 refreshes the image displayed on the display 60 at intervals sufficiently shorter than the change in the surrounding environment accompanying the movement of the vehicle V.
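A speed-dependent refresh interval of the kind described above might be computed as follows. The patent fixes no concrete values, so the base interval, scaling factor, and floor below are purely illustrative assumptions.

```python
def refresh_interval_ms(speed_kmh, base_ms=33.0, factor=4.0, floor_ms=8.0):
    """Return a display refresh interval that shortens as the vehicle
    speeds up, so redraws stay well ahead of the change in scenery.
    At standstill the base interval (~30 fps) is used; the interval is
    clamped to a hardware floor at high speed."""
    if speed_kmh <= 0:
        return base_ms
    interval = base_ms / (1.0 + speed_kmh / 100.0 * factor)
    return max(interval, floor_ms)
```

The display control unit 18 would call such a function with the speed obtained from the vehicle state acquisition unit each time it schedules a redraw.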

The display control unit 18 displays, on the display 60, an image corresponding to the traveling-direction side of the vehicle V based on the steering angle, pitch angle, roll angle, and the like among the acquired motion parameters. Here, the vehicle V may turn in a direction other than the straight traveling direction (a right or left turn) under its control. In this case, the passenger PS may direct the line of sight not toward the front of the vehicle V but toward the traveling direction, that is, the turning direction. By displaying an image corresponding to the traveling-direction side of the vehicle V on the display 60 based on the motion parameters, the display control unit 18 can display an image that matches the line of sight of the passenger PS.

The display control unit 18 may be configured to display various images on the display 60 based on the vibration of the vehicle V. For example, in accordance with the vibration applied to the vehicle V, the display control unit 18 shifts the various images by a distance and in a direction opposite to the distance and direction in which the vehicle V moves with the vibration. Thus, the in-vehicle performance apparatus 2 can suppress shaking of the image displayed on the display 60 due to the vibration. Therefore, the in-vehicle performance apparatus 2 can reduce the likelihood that the passenger PS becomes motion sick while visually checking the image on the display 60. The display control unit 18 may also be configured to display various images on the display 60 based on the relative movement distance and direction between the vibration applied to the vehicle V and the vibration applied to the passenger PS.
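The opposite-direction shift described above amounts to subtracting the vibration displacement from the drawing position. A minimal sketch, with hypothetical names and 2-D screen coordinates assumed:

```python
def compensated_position(base_pos, vibration_offset):
    """Shift the drawn image by the opposite of the displacement the
    vibration imparted to the vehicle, so the image appears steady to
    the passenger despite the shaking."""
    bx, by = base_pos
    vx, vy = vibration_offset
    return (bx - vx, by - vy)
```

For the relative-vibration variant, `vibration_offset` would instead be the difference between the vehicle's displacement and the passenger's.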

[ third embodiment ]

Hereinafter, a third embodiment of the present invention will be described with reference to the drawings. In the third embodiment, the in-vehicle performance apparatus 3 that specifies an instruction target based on a three-dimensional point group in the real space in the direction instructed by the instruction operation of the passenger PS and generates a history of the instruction operation of the passenger PS will be described. The same components as those in the above-described embodiment are denoted by the same reference numerals, and description thereof is omitted. Here, the in-vehicle rendering device 3 is an example of the "instruction measurement device".

Fig. 11 is a functional configuration diagram showing an example of the configuration of the in-vehicle rendering apparatus 3 according to the third embodiment. The in-vehicle performance apparatus 3 (communication unit 80) according to the present embodiment transmits and receives information to and from a service provider terminal (hereinafter, service provider terminal apparatus TM) and the interface apparatus 800 via the network NW.

The interface device 800 includes a control unit 810 and a storage unit 820. The storage unit 820 stores object information 820-1 and service provider information 820-2. The object information 820-1 is information in which information indicating an object existing in the real space indicated by a three-dimensional point group is associated with information indicating that three-dimensional point group. The service provider information 820-2 is information in which information indicating the object is associated with information indicating the service provider associated with the object.

The control unit 810 searches for the object information 820-1 and the service provider information 820-2 based on the information received from the in-vehicle entertainment apparatus 3, and transmits information indicating the object and information indicating the service provider to the in-vehicle entertainment apparatus 3.

The in-vehicle performance apparatus 3 includes a sound detection unit 20, a motion detection unit 30, an imaging unit 40, an input unit 50, a display 60, a position detection unit 70, a communication unit 80, and a line-of-sight detection unit 90. The in-vehicle performance apparatus 3 includes a control unit 10B in place of (or in addition to) the control unit 10 or the control unit 10A, and includes a storage unit 500A in place of (or in addition to) the storage unit 500. The storage unit 500A stores response image information 500-1, history information 500-2, and service provider information 500-3. Details of each type of information will be described later.

The control unit 10B executes the program stored in the storage unit 500A, and realizes the sound information acquisition unit 11, the motion information acquisition unit 12, the image information acquisition unit 13, the display control unit 18, the position information acquisition unit 19, the passenger information acquisition unit 111, the line of sight information acquisition unit 112, the coordinate acquisition unit 113, the object information acquisition unit 114, the service provider specifying unit 115, the history information generation unit 116, the notification unit 117, and the attribute specifying unit 118 as functional units thereof.

The passenger PS inputs information (hereinafter, referred to as a passenger ID) capable of identifying the passenger to the input unit 50, and the passenger information acquisition unit 111 acquires the passenger ID input to the input unit 50. The passenger PS inputs the passenger ID to the input unit 50 when riding in the vehicle V, for example.

The line-of-sight detection unit 90 detects, for example, the position in real space visually recognized by the passenger PS (hereinafter referred to as the "viewpoint position"), based on images captured by an imaging unit (not shown) that images the interior of the vehicle V. The line-of-sight information acquisition unit 112 acquires information indicating the viewpoint position of the passenger PS detected by the line-of-sight detection unit 90.

The coordinate acquisition unit 113 acquires a three-dimensional point group in the real space in the direction indicated by the instruction operation of the passenger PS, based on the instruction operation acquired by the operation information acquisition unit 12, the viewpoint position of the passenger PS acquired by the line-of-sight information acquisition unit 112, and the surrounding image of the vehicle V captured by the imaging unit 40. For example, the coordinate acquisition unit 113 detects feature points of an object that appears in the surrounding image captured by the imaging unit 40 and exists in the real space in the indicated direction, and acquires the set of detected feature points as the three-dimensional point group in the real space in the indicated direction. In this case, the imaging unit 40 is a stereo camera.
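With a stereo camera, the standard way to turn matched feature points into 3-D points is triangulation from disparity (depth Z = f·B/d). The sketch below illustrates this under assumed calibration values; the patent does not specify the camera parameters or the reconstruction method, so all names and numbers are hypothetical.

```python
def triangulate(disparity_px, focal_px, baseline_m):
    """Depth of one feature point from stereo disparity: Z = f * B / d.
    Returns None for non-positive disparity (no valid match)."""
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px

def point_cloud(features, focal_px=700.0, baseline_m=0.12):
    """Build a 3-D point group from matched feature points, each given
    as (u, v, disparity) in the left image, using the pinhole model
    X = u*Z/f, Y = v*Z/f."""
    points = []
    for u, v, d in features:
        z = triangulate(d, focal_px, baseline_m)
        if z is None:
            continue  # skip unmatched features
        points.append((u * z / focal_px, v * z / focal_px, z))
    return points
```

The resulting point group is what the coordinate acquisition unit 113 would forward to the interface device 800.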

The object information acquiring unit 114 acquires information indicating an object existing in the actual space based on information indicating a three-dimensional point group in the actual space in the direction specified by the coordinate acquiring unit 113. The object information acquiring unit 114 transmits, for example, information indicating a three-dimensional point group in the real space in the direction specified by the coordinate acquiring unit 113 to the interface device 800 via the network NW. When receiving information indicating a three-dimensional point group from the in-vehicle rendering device 3, the control unit 810 of the interface device 800 searches the object information 820-1 using the information as a search key, and specifies an object associated with the three-dimensional point group indicated by the information. The control unit 810 transmits information indicating the identified object to the in-vehicle rendering device 3 via the network NW. The object information acquiring unit 114 acquires information indicating the object received from the interface device 800.

The service provider specifying unit 115 specifies the service provider associated with the object indicated by the information acquired by the object information acquiring unit 114. For example, the service provider specifying unit 115 transmits that information to the interface device 800 via the network NW. When the information indicating the object is received from the in-vehicle performance apparatus 3, the control unit 810 of the interface device 800 searches the service provider information 820-2 using the information as a search key, and specifies the service provider associated with the object indicated by the information. The control unit 810 transmits information indicating the specified service provider to the in-vehicle performance apparatus 3 via the network NW. The service provider specifying unit 115 acquires the information indicating the service provider received from the interface device 800.

The history information generating unit 116 generates history information 500-2 indicating the history of the instruction target instructed by the instruction operation of the passenger PS.

Fig. 12 is a diagram showing an example of the contents of the history information 500-2. As shown in fig. 12, the history information 500-2 is information in which, for each passenger ID, the passenger ID, the date and time at which the passenger PS performed the instruction operation (hereinafter referred to as the instruction date and time), the instruction target indicated by the instruction operation of the passenger PS, and the service provider corresponding to the instruction target are associated with one another. For example, the history information generating unit 116 generates the history information 500-2 by associating the date and time of the instruction operation of the passenger PS acquired by the operation information acquiring unit 12, the object indicated by the instruction operation as indicated by the information acquired by the object information acquiring unit 114, the service provider corresponding to the indicated object as indicated by the information acquired by the service provider specifying unit 115, and the passenger ID of the passenger PS, and stores the history information in the storage unit 500A. The relationship between an object and a service provider can be set arbitrarily; for example, it is not essential that the service provider be the owner of the object. Further, an object associated with no business (for example, a sign, a traffic signal, or the like in the first embodiment) may also be defined as an object associated with a service provider.
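One row of the history information 500-2, as described above, associates four fields. A minimal sketch of how such a record might be built and appended (the field names and dictionary representation are assumptions; the patent does not prescribe a storage format):

```python
from datetime import datetime

def make_history_entry(passenger_id, target, provider, when=None):
    """One row of history information 500-2: passenger ID, instruction
    date/time, instructed target, and the associated service provider."""
    return {
        "passenger_id": passenger_id,
        "instructed_at": (when or datetime.now()).isoformat(),
        "target": target,
        "service_provider": provider,
    }

# History information 500-2 as an in-memory list; in practice the
# history information generating unit 116 stores it in storage unit 500A.
history = []
history.append(make_history_entry("PS-001", "signboard A", "Store A",
                                  datetime(2018, 6, 15, 12, 0)))
```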

The notification unit 117 notifies the service provider associated with the object indicated by the passenger PS that the passenger PS has indicated that object.

Fig. 13 is a diagram showing an example of the contents of the service provider information 500-3. As shown in fig. 13, the service provider information 500-3 is information in which information indicating a service provider (the service provider name shown in the figure) is associated with a notification destination. The notification unit 117 searches the service provider information 500-3 using the service provider specified by the service provider specifying unit 115 as a search key, and notifies the notification destination associated with that service provider that the passenger PS has performed an instruction operation expressing interest in the object associated with the service provider. The notification destination indicates, for example, the address of the service provider terminal device TM (e.g., a server device).
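The lookup-and-notify flow above can be sketched as follows, with the service provider information 500-3 modeled as a name-to-destination mapping. Function names, the address scheme, and the message text are all hypothetical.

```python
def notify(service_provider_info, provider_name, passenger_id, target):
    """Look up the notification destination registered for a service
    provider in information 500-3 and return (destination, message)
    that would be sent, or None if no destination is registered."""
    destination = service_provider_info.get(provider_name)
    if destination is None:
        return None  # provider has no registered notification destination
    message = f"Passenger {passenger_id} indicated interest in {target}"
    return (destination, message)
```

Actual delivery to the service provider terminal device TM would go through the communication unit 80 via the network NW.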

The notification unit 117 may notify the service provider each time the passenger PS performs the instruction operation, or may extract each service provider associated with the history information 500-2 for a predetermined period and notify the service provider. The notification unit 117 may notify the history information 500-2 to the passenger PS in addition to the service provider. Thus, the passenger PS can use the history information 500-2 as information indicating the interest level of the passenger PS. The passenger PS can acquire information on the service provider that has performed the instruction operation by referring to the history information 500-2 notified.

[ summary of third embodiment ]

As described above, the in-vehicle performance apparatus 3 of the present embodiment includes: a motion detection unit 30 that detects a motion of the passenger PS of the vehicle V; a line-of-sight detection unit 90 that detects the viewpoint position of the passenger PS; a coordinate acquisition unit 113 that acquires a three-dimensional point group in the real space in the direction of the instruction operation of the passenger PS, based on the instruction operation detected by the motion detection unit 30 and the viewpoint position detected by the line-of-sight detection unit 90; an object information acquisition unit 114 that acquires, from the interface device 800 that supplies information indicating an object existing in the real space indicated by a three-dimensional point group, information indicating the object associated with the three-dimensional point group acquired by the coordinate acquisition unit 113; a service provider specifying unit 115 that specifies the service provider associated with the object indicated by the information acquired by the object information acquisition unit 114, based on the service provider information 820-2 indicating service providers associated with objects; and a history information generating unit 116 that generates history information 500-2 in which the service provider specified by the service provider specifying unit 115 is associated with the object indicated by the information acquired by the object information acquisition unit 114. By acquiring the history of the instruction operations performed by the passenger PS in this way, the in-vehicle performance apparatus 3 can collect information related to the interests of the passenger PS. Further, according to the in-vehicle performance apparatus 3 of the present embodiment, the service provider associated with an object on which an instruction operation has been performed can be notified that the instruction operation has been performed.

[ three-dimensional dot group to be transmitted to the interface device 800 ]

In the above description, the case where the information indicating the three-dimensional point group acquired by the coordinate acquisition unit 113 is transmitted to the interface device 800 has been described, but the present invention is not limited to this. The coordinate acquisition unit 113 may be configured to select a three-dimensional point group to be transmitted to the interface device 800 from among the acquired three-dimensional point groups based on the attribute of the surrounding environment of the vehicle V, and to transmit the three-dimensional point group.

In this case, the attribute specifying unit 118 specifies the attribute of the surrounding environment of the vehicle V based on the surrounding image of the vehicle V captured by the imaging unit 40. An attribute is, for example, a property (characteristic) of the surrounding environment of the vehicle V, such as being an expressway or a shop-dense area. The attribute specifying unit 118 analyzes, for example, whether a traffic sign of an expressway appears or whether many store signboards appear in the surrounding image captured by the imaging unit 40, and thereby specifies the attribute of the surrounding environment of the vehicle V. Here, a property (characteristic) of the surrounding environment is given as an example of the "attribute", but the present invention is not limited thereto. The "attribute" may also be a set of attributes of some or all recognizable objects, specified, for example, with respect to their coordinates, directions, and the like.

The attribute specifying unit 118 may be configured to transmit the surrounding image of the vehicle V captured by the imaging unit 40 to the interface device 800 via the network NW, and the interface device 800 may specify the attribute of the surroundings of the vehicle V. In this case, the attribute specifying unit 118 specifies the attribute by transmitting the surrounding image captured by the imaging unit 40 via the network NW and acquiring information indicating the attribute of the surrounding environment of the vehicle V from the interface device 800.

The attribute specifying unit 118 may also be configured to specify the attribute of the environment around the vehicle V based on, for example, position attribute information in which position information and attributes are associated with each other. In this case, the position attribute information may be stored in the storage unit 500A or the storage unit 820. When the position attribute information is stored in the storage unit 820, the attribute specifying unit 118 specifies the attribute by transmitting information indicating the current position of the vehicle V via the network NW and acquiring information indicating the attribute of the environment around the vehicle V from the interface device 800.

The coordinate acquisition unit 113 transmits information indicating the three-dimensional point group to the interface device 800 based on the attribute specified by the attribute specification unit 118. Here, when the vehicle V moves at a high speed on the "expressway", there is a possibility that the amount of data of the three-dimensional point group acquired in the instruction operation increases. Therefore, for example, when the specified attribute is "expressway", the coordinate acquisition unit 113 can make the range of acquiring the three-dimensional point group instructed by the instruction operation narrower than usual, and can suppress the amount of data to be transmitted to the interface device 800.

In addition, when the vehicle V moves in a shop-dense area, a plurality of objects (that is, signboards) may be indicated by a single instruction operation. Therefore, for example, when the specified attribute is "shop-dense area", the coordinate acquisition unit 113 can make the range over which the three-dimensional point group indicated by the instruction operation is acquired wider than usual, so that a larger amount of information indicating objects can be acquired from the interface device 800. Thus, the in-vehicle performance apparatus 3 can generate history information 500-2 enriched with information. That is, based on instruction operations, including operations of the passenger not related to driving, measured by the instruction measurement device as one aspect of the invention, various displays and performances can be presented to the passenger as described in the first embodiment; at the same time, by appropriately recording and holding the instruction operations and their history, the information can be reused and applied to other purposes.
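The attribute-dependent adjustment of the acquisition range described in the two preceding paragraphs can be sketched as a simple mapping. The patent fixes no concrete widths, so the attribute keys and the multipliers below are illustrative assumptions.

```python
def acquisition_range(attribute, default_m=30.0):
    """Width of the region from which the 3-D point group is gathered,
    adjusted by the attribute of the surroundings: narrower on an
    expressway (to limit the data transmitted to the interface device),
    wider in a shop-dense area (to catch more candidate objects)."""
    ranges = {
        "expressway": default_m * 0.5,  # high speed: keep data volume down
        "shop_dense": default_m * 2.0,  # many signboards: cast a wider net
    }
    return ranges.get(attribute, default_m)
```

The coordinate acquisition unit 113 would apply this range when selecting which part of the acquired point group to transmit to the interface device 800.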

Although the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to the embodiments, and modifications can be appropriately made within the scope not departing from the gist of the present invention. The structures described in the above embodiments may be combined.
