Information processing device, information processing system, information processing program, and information processing method

Document number: 1525643    Publication date: 2020-02-11

Note: This technology (information processing device, information processing system, information processing program, and information processing method) was created by 二宫知子, 虫壁和也, and 须山明彦 on 2017-06-21. Its main content is as follows. The information processing device (1) is provided with an output unit (43), a reception unit (44), a storage unit (41), and a position determination processing unit (45). The output unit (43) outputs detection signals to a plurality of acoustic devices (3A to 3F). The reception unit (44) receives the arrangement of each of the plurality of acoustic devices on the basis of the response signals output from the plurality of acoustic devices (3A to 3F). The storage unit (41) stores arrangement data indicating the arrangement of the plurality of acoustic devices (3A to 3F). The position determination processing unit (45) assigns the arrangement received by the reception unit (44) to any one of the plurality of acoustic devices (3A to 3F) included in the arrangement data, and causes the storage unit (41) to store the arrangement assigned to the arrangement data.

1. An information processing apparatus comprising:

an output unit that outputs a detection signal to a plurality of acoustic devices;

a reception unit configured to receive an arrangement of each of the plurality of acoustic devices based on response signals output from the plurality of acoustic devices that have received the detection signal;

a storage unit that stores arrangement data indicating the arrangement of the plurality of acoustic devices; and

a position determination processing unit that assigns the arrangement received by the reception unit to any one of the plurality of acoustic devices included in the arrangement data, and causes the storage unit to store the arrangement assigned to the arrangement data.

2. The information processing apparatus according to claim 1, wherein

the reception unit is configured to receive a center position, and

the information processing apparatus further comprises a channel allocation unit configured to allocate a channel to each of the plurality of acoustic devices in accordance with the center position received by the reception unit.

3. The information processing apparatus according to claim 2, wherein

the storage unit stores the center position, and

the channel allocation unit is configured to allocate a channel corresponding to a 1st center position, which is a center position newly received by the reception unit, to each of the plurality of acoustic devices when the 1st center position is different from a 2nd center position, which is the center position stored in the storage unit.

4. The information processing apparatus according to any one of claims 1 to 3, wherein

the output unit is configured to transmit an estimation signal for estimating the plurality of acoustic devices present in a predetermined space, and to transmit the detection signal to an acoustic device that has received the estimation signal.

5. The information processing apparatus according to any one of claims 1 to 4, further comprising

a display unit that displays an arrangement diagram based on the arrangement data, wherein

the information processing apparatus is configured to receive the arrangement by receiving an operation performed by a user using the arrangement diagram.

6. An information processing system comprising:

the information processing apparatus according to any one of claims 1 to 5; and

the plurality of acoustic devices, each of which outputs the response signal upon receiving the detection signal output from the information processing apparatus.

7. The information processing system according to claim 6, wherein

the response signal is a sound.

8. An information processing program causing a computer to execute:

a step of outputting a detection signal to a plurality of acoustic devices;

a step of receiving, at a reception unit, an arrangement of each of the plurality of acoustic devices based on response signals output from the plurality of acoustic devices that have received the detection signal;

a step of assigning the arrangement received by the reception unit to any one of the plurality of acoustic devices included in arrangement data stored in a storage unit; and

a step of causing the storage unit to store the arrangement assigned to the arrangement data.

9. An information processing method comprising:

outputting a detection signal to a plurality of acoustic devices;

receiving, at a reception unit, an arrangement of each of the plurality of acoustic devices based on response signals output from the plurality of acoustic devices that have received the detection signal;

storing, in a storage unit, arrangement data indicating the arrangement of the plurality of acoustic devices;

assigning the arrangement received by the reception unit to any one of the plurality of acoustic devices included in the arrangement data stored in the storage unit; and

causing the storage unit to store the arrangement assigned to the arrangement data.

10. The information processing method according to claim 9, further comprising:

receiving a center position at the reception unit; and

assigning a channel to each of the plurality of acoustic devices in accordance with the center position received by the reception unit.

11. The information processing method according to claim 10, further comprising:

storing the center position in the storage unit; and

assigning a channel corresponding to a 1st center position, which is a center position newly received by the reception unit, to each of the plurality of acoustic devices when the 1st center position is different from a 2nd center position, which is the center position stored in the storage unit.

12. The information processing method according to any one of claims 9 to 11, further comprising:

transmitting an estimation signal for estimating the plurality of acoustic devices present in a predetermined space; and

transmitting the detection signal to an acoustic device that has received the estimation signal.

13. The information processing method according to any one of claims 9 to 12, further comprising:

displaying an arrangement diagram based on the arrangement data on a display unit; and

receiving the arrangement by receiving an operation performed by a user using the arrangement diagram.

Technical Field

One embodiment of the present invention relates to an information processing apparatus, an information processing system, an information processing program, and an information processing method, and more particularly to an information processing apparatus, an information processing system, an information processing program, and an information processing method for specifying the arrangement of acoustic equipment.

Background

Conventionally, there is a multichannel audio system having a plurality of channels and a number of speakers corresponding to the channels (for example, patent document 1).

In such a multichannel audio system, a signal processing section of an amplifier device performs channel allocation processing in order to construct a multichannel playback environment. Through this processing, the multichannel audio system determines which of the plurality of (for example, nine) speakers is located at which position (that is, the positions of the plurality of speakers are determined).

In the channel allocation processing, the user arranges a microphone on each of the left, right, front, and rear sides of the viewing position, and the microphone picks up measurement sounds output from the respective speakers. The distance between each microphone position and each speaker is measured using the sound pickup data obtained by the microphone. Based on these distances, the multichannel audio system determines which speaker is located at which position.

Disclosure of Invention

Problems to be solved by the invention

The multichannel audio system (information processing apparatus) of patent document 1 uses microphones to determine the positions of the plurality of speakers (acoustic devices). The multichannel audio system needs to take four measurements for each of the plurality of speakers. In this multichannel audio system, one microphone is used, and the user places the microphone, in order, at four points in front of, behind, to the left of, and to the right of the viewing position. In such a multichannel audio system, the number of measurements is large, and determining the positions of the plurality of speakers takes time because the user must move the microphone. As a result, in the multichannel audio system of patent document 1, construction of the multichannel playback environment may become complicated.

Therefore, an object of the present invention is to provide an information processing apparatus, an information processing system, an information processing program, and an information processing method that can more easily determine the arrangement of an acoustic device.

Means for solving the problems

An information processing device according to an embodiment of the present invention includes: an output unit that outputs a detection signal to a plurality of acoustic devices; a reception unit configured to receive an arrangement of each of the plurality of acoustic devices based on response signals output from the plurality of acoustic devices that have received the detection signal; a storage unit that stores arrangement data indicating the arrangement of the plurality of acoustic devices; and a position determination processing unit configured to assign the arrangement received by the reception unit to any one of the plurality of acoustic devices included in the arrangement data, and to cause the storage unit to store the arrangement assigned to the arrangement data.

Effects of the invention

According to an embodiment of the present invention, the configuration of the acoustic device can be determined more easily.

Drawings

Fig. 1 is a block diagram showing the configuration of an information processing system.

Fig. 2 is a schematic diagram showing an example of a space constituting an information processing system.

Fig. 3 is a block diagram showing the configuration of an acoustic apparatus.

Fig. 4 is a block diagram showing the structure of an AV receiver.

Fig. 5 is a correspondence table showing an example of information on a plurality of acoustic devices.

Fig. 6 is a block diagram showing the configuration of the information processing apparatus.

Fig. 7 is an explanatory diagram showing an example of the layout diagram displayed on the display unit.

Fig. 8 is a flowchart showing the operation of the information processing system.

Fig. 9 is a flowchart showing operations of the information processing device and each acoustic device in the estimation process of the information processing system.

Fig. 10 is a flowchart showing operations of the information processing apparatus and the acoustic device in the position determination processing of the acoustic device of the information processing system.

Fig. 11 is a flowchart showing an operation of the information processing apparatus in the channel assignment process of the information processing system.

Detailed Description

An information processing device 4, an information processing program, and an information processing system 10 according to an embodiment of the present invention will be described with reference to the drawings.

First, the information processing system 10 is explained with reference to fig. 1 and 2. Fig. 1 is a block diagram showing a configuration of an information processing system 10 according to an embodiment of the present invention. Fig. 2 is a schematic diagram showing an example of a space (living room r1 and bedroom r2) constituting the information processing system 10.

In the information processing apparatus 4, the information processing program, and the information processing system 10 of the present embodiment, the information processing apparatus 4 identifies the acoustic devices 3A to 3F to which the content is to be distributed. Then, in the information processing apparatus 4, the information processing program, and the information processing system 10, the arrangement of the acoustic devices that are targets of distribution of the content is determined, and the channel setting of these acoustic devices is performed.

As shown in fig. 1, the information processing system 10 includes an audio player 1, an AV receiver 2, a plurality of acoustic devices 3A to 3F, and an information processing apparatus 4. The information processing system 10 is constructed, for example, in a home having a plurality of spaces (rooms), and outputs content (music) played by the audio player 1 from one or more of the acoustic devices 3A to 3F. The plurality of acoustic devices 3A to 3F can be moved from one space (room) to another. That is, the plurality of acoustic devices 3A to 3F are not always arranged at the same positions in the same space. Therefore, the information processing system 10 is configured to appropriately identify the acoustic devices 3A to 3F arranged in the space desired by the user and to output appropriate content from those acoustic devices. In the information processing system 10, the information processing apparatus 4 estimates, through operations by the user, which of the plurality of acoustic devices 3A to 3F are arranged where in the space desired by the user.

The audio player 1 is a device that plays content, for example, a CD player or a DVD player. In the information processing system 10 of the present embodiment, the audio player 1 is disposed in the living room r1 as shown in fig. 2. The audio player 1 is connected to the AV receiver 2 wirelessly or by wire. The audio player 1 transmits the played content to the AV receiver 2. The audio player 1 is not limited to the example of being disposed in the living room r1, and may be disposed in the bedroom r2. Further, the information processing system 10 may include a plurality of audio players 1.

The AV receiver 2 constructs a wireless LAN using a router having a wireless access point function. The AV receiver 2 is connected to the audio player 1, the plurality of acoustic devices 3A to 3F, and the information processing apparatus 4 via, for example, the wireless LAN.

For example, as shown in fig. 2, the AV receiver 2 is disposed in the living room r1, near the television 5. The AV receiver 2 is not limited to the example of being disposed near the television 5, and may be disposed in another room such as the bedroom r2.

The AV receiver 2 is not limited to the example of obtaining the content from the audio player 1, and may download the content from a content server via the internet (for example, internet broadcasting). Further, the AV receiver 2 may be connected to the plurality of acoustic devices 3A to 3F via a LAN cable. Further, the AV receiver 2 may also have the function of the audio player 1.

The plurality of acoustic devices 3A to 3F are, for example, speakers or devices having a speaker function. The plurality of acoustic devices 3A to 3F are disposed in a plurality of different spaces, for example, in the living room r1 and the bedroom r2. The plurality of acoustic devices 3A to 3F output sounds based on the signals output from the AV receiver 2. The plurality of acoustic devices 3A to 3F are connected to the AV receiver 2 wirelessly or by wire.

The information processing device 4 is a portable mobile terminal such as a smartphone. The user transmits and receives information to and from the AV receiver 2 using a dedicated application downloaded in advance to the information processing apparatus 4.

Next, details of the AV receiver 2, the plurality of acoustic devices 3A to 3F, and the information processing apparatus 4 according to the present embodiment will be described. Fig. 3 is a block diagram showing the configuration of each acoustic device. Fig. 4 is a block diagram showing the configuration of the AV receiver 2.

As shown in fig. 2, of the plurality of (six) acoustic devices 3A to 3F, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D are disposed in the living room r1. The 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D are disposed at different positions in the living room r1. Further, of the plurality of acoustic devices 3A to 3F, the 5th acoustic device 3E and the 6th acoustic device 3F are disposed in the bedroom r2. The 5th acoustic device 3E and the 6th acoustic device 3F are disposed at different positions in the bedroom r2. The number and arrangement of the plurality of acoustic devices 3A to 3F are not limited to those shown in the present embodiment.

Here, in fig. 3, the 1st acoustic device 3A is explained as an example. The other acoustic devices (the 2nd acoustic device 3B, the 3rd acoustic device 3C, the 4th acoustic device 3D, the 5th acoustic device 3E, and the 6th acoustic device 3F) all have the same configuration and functions. The 1st acoustic device 3A includes a CPU31, a communication unit 32, a RAM33, a ROM34, and a speaker 35. The 1st acoustic device 3A further includes a microphone 36.

The CPU31 controls the communication section 32, RAM33, ROM34, speaker 35, and microphone 36.

The communication unit 32 is, for example, a wireless communication unit conforming to the Wi-Fi standard. The communication unit 32 communicates with the AV receiver 2 via a router with a wireless access point. The communication unit 32 can also communicate with the information processing apparatus 4 in the same manner.

The ROM34 is a storage medium. The ROM34 stores programs for operating the CPU31. The CPU31 reads out the programs stored in the ROM34 to the RAM33 and executes them, thereby performing various processes.

The speaker 35 has a D/a converter that converts a digital audio signal into an analog audio signal and an amplifier that amplifies the audio signal. The speaker 35 outputs sound (e.g., music) based on a signal input from the AV receiver 2 via the communication section 32.

The microphone 36 receives an estimation signal (for example, a test sound) output from the information processing apparatus 4. In other words, the microphone 36 picks up the test sound output as the estimation signal from the information processing apparatus 4. When the microphone 36 picks up the test sound, the CPU31 outputs, for example, a buzzer sound as a response signal. The response signal is output from the speaker 35.

The response signal is not limited to a sound. The CPU31 may transmit the response signal as data from the communication section 32 to the information processing apparatus 4, either directly or via the AV receiver 2. Further, the response signal may be light, or may be both a sound and light. In this case, the 1st acoustic device 3A includes a light emitting element such as an LED, and the CPU31 causes the light emitting element to emit light as the response signal.
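For illustration only, the following Python sketch models this device-side behavior under the assumption that the estimation signal is a test sound and the response signal is a buzzer sound; the class name, method names, and signal strings are hypothetical and are not part of the embodiment.

```python
class AcousticDevice:
    """Minimal model of one acoustic device (for example, the 1st acoustic device 3A)."""

    def __init__(self, name):
        self.name = name
        self.pickup_enabled = False  # state of the microphone 36

    def start_pickup(self):
        # Corresponds to receiving the sound pickup start notification.
        self.pickup_enabled = True

    def stop_pickup(self):
        # Corresponds to receiving the sound pickup end notification.
        self.pickup_enabled = False

    def on_sound(self, signal):
        """Return a response signal if the test sound (estimation signal) is picked up."""
        if self.pickup_enabled and signal == "test_sound":
            return self.name + ": buzzer"  # response signal output from the speaker 35
        return None


device_3a = AcousticDevice("3A")
device_3a.start_pickup()
print(device_3a.on_sound("test_sound"))  # -> 3A: buzzer
```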

As shown in fig. 4, the AV receiver 2 includes a CPU21, a content input unit 22, a communication unit 23, a DSP24, a ROM25, and a RAM 26.

The CPU21 controls the content input section 22, the communication section 23, the DSP24, the ROM25, and the RAM 26.

The content input section 22 communicates with the audio player 1 by wire or wirelessly. The content input section 22 acquires content from the audio player 1.

The communication unit 23 is, for example, a wireless communication unit conforming to the Wi-Fi standard. The communication unit 23 communicates with each of the plurality of audio apparatuses 3A to 3F via a router with a wireless access point. In addition, when the AV receiver 2 has a router function, the communication unit 23 directly communicates with each of the plurality of acoustic devices 3A to 3F.

The DSP24 performs various signal processes on signals input to the content input section 22. When receiving the encoded data as a signal of the content, the DSP24 performs signal processing such as decoding the encoded data and extracting an audio signal.

The ROM25 is a storage medium. The ROM25 stores programs for operating the CPU21. The CPU21 reads out the programs stored in the ROM25 to the RAM26 and executes them, thereby performing various processes.

Further, the ROM25 stores information on the plurality of acoustic devices 3A to 3F. Fig. 5 is a correspondence table showing an example of the information on the plurality of acoustic devices 3A to 3F stored in the ROM25. For each of the plurality of acoustic devices 3A to 3F, information such as an IP address, a MAC address, a location (arrangement), and a channel is stored in association with one another.

The communication unit 23 receives data from the information processing apparatus 4. The content input section 22 acquires the content of the audio player 1 based on the received data. Then, the communication unit 23 transmits audio data to each of the plurality of acoustic apparatuses 3A to 3F based on the content received from the audio player 1 by the content input unit 22.

The communication unit 23 transmits and receives data to and from the information processing device 4. When the information processing apparatus 4 receives a setting operation or the like from the user, it transmits a start notification to the AV receiver 2. The communication unit 23 receives the start notification transmitted from the information processing apparatus 4. Upon receiving the start notification, the communication unit 23 transmits a sound pickup start notification to the plurality of acoustic devices 3A to 3F so that the microphones 36 of the plurality of acoustic devices 3A to 3F enter a sound pickup state. Further, the information processing apparatus 4 transmits an end notification to the AV receiver 2 in accordance with a timeout or an operation by the user. The communication unit 23 receives the end notification from the information processing apparatus 4. While the microphones 36 of the plurality of acoustic devices 3A to 3F are in the sound pickup state, the communication unit 23 then transmits a sound pickup end notification to each of the acoustic devices 3A to 3F so that their microphones 36 enter a sound pickup stop state.
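For illustration only, the following Python sketch models the relay role of the communication unit 23 described above, under the assumption that notifications and pickup commands can be represented as plain strings; the function name, message strings, and callback signature are hypothetical and are not part of the embodiment.

```python
def relay_notification(notification, device_ids, send_to_device):
    """Forward a start/end notification from the information processing apparatus 4
    to each acoustic device as a sound pickup command, as the communication unit 23
    of the AV receiver 2 is described doing."""
    if notification == "start":
        command = "pickup_start"  # put the microphone 36 into the sound pickup state
    elif notification == "end":
        command = "pickup_end"    # put the microphone 36 into the sound pickup stop state
    else:
        return
    for device_id in device_ids:
        send_to_device(device_id, command)


log = []
relay_notification("start", ["3A", "3B", "3C", "3D", "3E", "3F"],
                   lambda dev, cmd: log.append((dev, cmd)))
print(log)  # [('3A', 'pickup_start'), ..., ('3F', 'pickup_start')]
```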

A unique IP address (local address) is assigned to each of the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, the 4th audio device 3D, the 5th audio device 3E, and the 6th audio device 3F. The IP addresses of the 1st to 6th audio devices 3A to 3F are assigned by the AV receiver 2. The IP addresses of the 1st to 6th audio devices 3A to 3F may instead be assigned by a router or the like.

Further, each of the 1st to 6th audio devices 3A to 3F has a MAC address as unique individual identification information. The individual identification information may instead be identification information such as a serial number or an ID number. The IP addresses and the MAC addresses are associated with the plurality of audio devices 3A to 3F in advance on a one-to-one basis. The associated information is stored in the AV receiver 2.
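For illustration only, the correspondence table of fig. 5 can be pictured as a small registry keyed by device. The Python sketch below is a minimal assumed representation; the field names, addresses, and values are placeholders and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class DeviceRecord:
    """One row of a fig. 5 style correspondence table held by the AV receiver 2."""
    ip_address: str           # local address assigned by the AV receiver 2 (or a router)
    mac_address: str          # individual identification information
    location: Optional[str]   # arrangement; filled in by the position determination process
    channel: Optional[str]    # for example "FL", "FR", "SL", "SR"; filled in later


# Hypothetical contents; the addresses are placeholders, not values from the patent.
registry: Dict[str, DeviceRecord] = {
    "3A": DeviceRecord("192.168.1.11", "AA:BB:CC:00:00:01", None, None),
    "3B": DeviceRecord("192.168.1.12", "AA:BB:CC:00:00:02", None, None),
}

# After the position determination and channel setting processes, the rows are updated:
registry["3A"].location = "a1"
registry["3A"].channel = "FL"
print(registry["3A"])
```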

The information processing device 4 is a portable mobile terminal such as a smartphone. Fig. 6 is a block diagram showing the configuration of the information processing apparatus 4. The information processing device 4 includes a CPU40, a storage unit 41, a display unit 42, an output unit 43, a reception unit 44, a position specification processing unit 45, a channel assignment unit 46, and a RAM 47. In addition to the above-described configuration, the information processing apparatus 4 has functions and a configuration existing in a smartphone.

The information processing device 4 may be any device that can be operated by a user, such as a tablet PC, a smart watch, or a PC.

The CPU40 performs various processes by, for example, reading out programs stored in the storage unit 41 to the RAM47 and executing them.

The output unit 43 transmits an estimation signal for estimating the arrangement, within the space, of the plurality of acoustic devices 3A to 3F existing in a predetermined space, and transmits a detection signal to the acoustic devices that have received the estimation signal. The output unit 43 includes a speaker, a light emitting element, an infrared transmitter, an antenna, and the like, and can output sound, light, infrared rays, or other signals. In the information processing apparatus 4 of the present embodiment, the output unit 43 outputs a sound from the speaker, for example, a buzzer sound, as the estimation signal. The output unit 43 outputs the buzzer sound at a volume that can be picked up only by the acoustic devices (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D) arranged in the predetermined space (for example, the living room r1). Thus, in the information processing system 10, only the acoustic devices that pick up the buzzer sound (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D) become the targets of estimation.

The estimation signal is not limited to a sound, and may be light, infrared rays, or the like. For example, when the estimation signal is light, the output unit 43 causes the light emitting element to emit light. When the estimation signal is infrared rays, the output unit 43 outputs infrared rays from the infrared transmitter.

Further, the output unit 43 outputs the detection signal to the plurality of acoustic devices (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D). More specifically, the output unit 43 outputs the detection signal, directly or via the AV receiver 2, to the acoustic devices to be estimated (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D). The output unit 43 outputs the detection signal, directly or via the AV receiver 2, to the acoustic device desired by the user (for example, the 1st acoustic device 3A).

Further, the output unit 43 transmits a start notification for notifying the start of the estimation processing and an end notification for notifying the end of the estimation processing to the plurality of acoustic apparatuses 3A to 3F directly or via the AV receiver 2. Thereby, each of the plurality of acoustic devices 3A to 3F sets the microphone 36 to the sound pickup state or the sound pickup stop state.

The storage unit 41 stores various programs executed by the CPU40. The storage unit 41 also stores arrangement data indicating the arrangement, within the space, of the plurality of acoustic devices 3A to 3F. The arrangement data is data in which the plurality of acoustic devices 3A to 3F are associated with the space and with the arrangement. Through the estimation process, the storage unit 41 stores each of the plurality of acoustic devices 3A to 3F in association with the space in which that acoustic device is arranged. For example, the storage unit 41 stores arrangement data in which the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D arranged in the living room r1 are associated with the living room r1. The storage unit 41 also stores arrangement data in which the 5th audio device 3E and the 6th audio device 3F arranged in the bedroom r2 are associated with the bedroom r2.

The arrangement is information indicating, for example, where in the living room r1 the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D are arranged. Through the position determination process, the storage unit 41 stores the arrangement of each of the plurality of acoustic devices 3A to 3F in association with that acoustic device.

The display unit 42 has a screen, for example, an LCD (Liquid Crystal Display), for displaying an application downloaded to the information processing apparatus 4. The user can operate the application by tapping or swiping on the screen.

The display unit 42 displays an arrangement diagram based on the arrangement data. Fig. 7 is an explanatory diagram showing an example of the arrangement diagram displayed on the display unit 42. As shown in fig. 7, a correspondence table in which the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D arranged in the living room r1 are associated with the arrangement and channel to be selected later is displayed in the upper part of the screen of the display unit 42. In addition, a simplified diagram (arrangement diagram) imitating the living room r1 is displayed in the lower part of the screen. In the arrangement diagram, arrangement locations a1 to a4 indicating the arrangements of the acoustic devices are displayed. The user operates the screen to associate the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D with the arrangement locations a1 to a4 in a one-to-one correspondence.

The reception unit 44, configured as, for example, a touch panel, receives the arrangement of each of the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D based on the response signals output from the plurality of acoustic devices that have received the detection signal. For example, in the case where the response signal is a sound, the user determines which acoustic device (for example, the 1st acoustic device 3A) is outputting the sound. The user then selects on the screen which of the arrangement locations a1 to a4 corresponds to the acoustic device (for example, the 1st acoustic device 3A) that is outputting the sound. As shown in fig. 7, one row is displayed on the screen of the display unit 42 for each of the acoustic devices. The user selects any one of the arrangement locations a1 to a4 for each of the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D from a pull-down list or the like.

The reception unit 44 also receives the center position. More specifically, the reception unit 44 receives the center position when the user touches a position in the arrangement diagram shown in the lower part of the screen in fig. 7.

The position determination processing unit 45 assigns the arrangement received by the reception unit 44 to any one of the plurality of acoustic devices 3A to 3F included in the arrangement data, and causes the storage unit 41 to store the arrangement assigned to the arrangement data. That is, the position determination processing unit 45 enters the arrangement locations a1 to a4 received by the reception unit 44 for the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D into the arrangement column shown in fig. 5. The storage unit 41 then stores the arrangement data in which the assigned arrangement locations a1 to a4 are associated with the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D.

The channel allocation unit 46 allocates a channel to each of the acoustic devices to be allocated, that is, to each of the plurality of acoustic devices to be estimated (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D), in correspondence with the center position received by the reception unit 44. When a 1st center position, which is the center position newly received by the reception unit 44, is different from a 2nd center position, which is the center position stored in the storage unit 41, the channel allocation unit 46 allocates a channel corresponding to the 1st center position to each of the plurality of acoustic devices (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D). The information processing apparatus 4 is preferably configured to transmit the allocated channel settings to the AV receiver 2.

The center position received by the reception unit 44 is stored in the storage unit 41. In the information processing system 10 of the present embodiment, as shown in fig. 2, the place where the television 5 is disposed can be set as the center position in the living room r1, for example. In the information processing system 10 of the present embodiment, the channel allocation unit 46 sets the channel of the acoustic device disposed on the left side, as viewed facing the center position, as the channel FL, and sets the channel of the acoustic device disposed on the right side as the channel FR. Further, with the center position as the front, the channel of the acoustic device disposed at the rear left is set as the channel SL, and the channel of the acoustic device disposed at the rear right is set as the channel SR.
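For illustration only, the following Python sketch expresses the FL/FR/SL/SR rule described above, under the assumption that each acoustic device's placement is already known as front/rear and left/right relative to the center position; the placements shown are hypothetical examples.

```python
def assign_channel(is_front, is_left):
    """Map a placement, viewed facing the center position, to a channel label
    following the FL/FR/SL/SR rule described above."""
    if is_front:
        return "FL" if is_left else "FR"
    return "SL" if is_left else "SR"


# Hypothetical placements for the living room r1 example (front/rear, left/right
# relative to the television 5 set as the center position):
placements = {
    "3A": (True, True),    # front left  -> FL
    "3B": (True, False),   # front right -> FR
    "3C": (False, True),   # rear left   -> SL
    "3D": (False, False),  # rear right  -> SR
}
channels = {dev: assign_channel(front, left) for dev, (front, left) in placements.items()}
print(channels)  # {'3A': 'FL', '3B': 'FR', '3C': 'SL', '3D': 'SR'}
```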

Further, the user can operate the information processing apparatus 4 to set the position of the television 5 as the center position. By storing the center position in the storage section 41, the user does not need to input the center position again when the information processing system 10 is used next time. As a result, the time for setting the channel can be shortened in the information processing device 4 and the information processing system 10 according to the present embodiment.

In the information processing apparatus 4 and the information processing system 10 according to the present embodiment, it is possible to specify the acoustic devices (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D) disposed in the space desired by the user, for example, the living room r1. In addition, in the information processing apparatus 4 and the information processing system 10 according to the present embodiment, it is possible to detect the arrangement of the specified acoustic devices (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D). As a result, the information processing apparatus 4 and the information processing system 10 according to the present embodiment can specify the arrangement of the acoustic devices 3A to 3F more easily. In the information processing apparatus 4 and the information processing system 10 according to the present embodiment, by specifying the center position, it is also possible to appropriately perform the channel setting of the specified acoustic devices (for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D).

The information processing apparatus 4 realizes the various functions described above by means of an information processing program executed by the CPU40 of the information processing apparatus 4. By executing the information processing program, the arrangement of the acoustic devices 3A to 3F can be determined more easily.

Here, the operation of the information processing system 10 will be described with reference to figs. 8 to 11. Fig. 8 is a flowchart showing the operation of the information processing system 10. As a precondition, the ROM25 of the AV receiver 2 stores data in which each of the plurality of audio devices 3A to 3F is associated with the IP address and the MAC address corresponding to that device, and the information processing apparatus 4 can receive the data. As shown in fig. 2, the user carries the information processing device 4 and operates it at the center of the living room r1. The user can also view the correspondence table shown in fig. 5 on the screen. The center position desired by the user is the position where the television 5 is arranged.

The information processing system 10 performs estimation processing for estimating which of the plurality of acoustic devices 3A to 3F are targets of estimation (step S11). The information processing system 10 then performs position determination processing for the acoustic devices determined to be targets of estimation (yes in step S12), for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D (step S13). When the arrangements of the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D have been determined, the information processing system 10 receives the center position and performs the channel setting processing (step S14).

The information processing system 10 ends the processing for the acoustic devices that are not determined to be targets of estimation (no in step S12), for example, the 5th audio device 3E and the 6th audio device 3F (transition to RETURN).

The estimation process in the information processing system 10 will now be explained. Fig. 9 is a flowchart showing the operations of the information processing device 4 and the acoustic devices 3A to 3F in the estimation process of the information processing system 10. The user operates the application on the screen to put the information processing system 10 into the processing start state. The output unit 43 of the information processing apparatus 4 transmits a start notification to the plurality of acoustic devices 3A to 3F via the AV receiver 2 (step S21). At this time, the information processing device 4 sets in advance a timeout (for example, 5 seconds) used to stop the estimation signal. Each of the plurality of acoustic devices 3A to 3F receives the start notification (step S22) and puts its microphone 36 into a sound pickup enabled state. Each of the plurality of acoustic devices 3A to 3F then sends the information processing device 4, via the AV receiver 2, a sound pickup preparation notification indicating that the microphone 36 has been set to the sound pickup enabled state (step S23). When the information processing apparatus 4 receives the sound pickup preparation notifications (step S24), the information processing apparatus 4 transmits the estimation signal (test sound) from the output unit 43 (step S25).

Among the plurality of acoustic devices 3A to 3F, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D arranged in the living room r1 pick up the estimation signal (step S26). The 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D transmit, directly or via the AV receiver 2, an estimation signal reception notification indicating that the estimation signal has been picked up to the information processing apparatus 4 (step S27). The information processing apparatus 4 receives the estimation signal reception notifications (step S28). At this time, the information processing apparatus 4 causes the display unit 42 to display the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D that have received the estimation signal. The information processing apparatus 4 stops the estimation signal in accordance with the timeout or a manual operation by the user (step S29). The information processing apparatus 4 sends the end notification to the plurality of acoustic devices 3A to 3F via the AV receiver 2 (step S30). The plurality of acoustic devices 3A to 3F receive the end notification (step S31) and stop the sound pickup state of the microphones 36.

On the other hand, the 5th acoustic device 3E and the 6th acoustic device 3F disposed in the bedroom r2 do not pick up the estimation signal. The 5th audio device 3E and the 6th audio device 3F notify the information processing apparatus 4, via the AV receiver 2, that the estimation signal has not been picked up. Since the information processing apparatus 4 specifies only the acoustic devices that have picked up the estimation signal, the acoustic devices that have not picked up the sound (here, the 5th acoustic device 3E and the 6th acoustic device 3F) need not notify the information processing apparatus 4 that the sound has not been picked up.
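For illustration only, the following Python sketch outlines the estimation process of fig. 9 from the side of the information processing apparatus 4, under the assumption that sending notifications and checking reception notifications can be abstracted as callbacks; the function and parameter names are hypothetical and are not part of the embodiment.

```python
import time


def run_estimation(device_ids, send, heard_test_sound, emit_test_sound, timeout_s=5.0):
    """Return the set of devices that picked up the estimation signal (the estimation targets).

    send(notification, device_id)   -- start/end notifications, e.g. via the AV receiver 2
    heard_test_sound(device_id)     -- True once the device has reported picking up the test sound
    emit_test_sound()               -- the output unit 43 emits the estimation signal
    """
    for dev in device_ids:
        send("start", dev)                 # microphones 36 enter the sound pickup state (S21-S23)
    emit_test_sound()                      # S25
    deadline = time.monotonic() + timeout_s
    targets = set()
    while time.monotonic() < deadline:     # collect reception notifications until timeout (S26-S29)
        targets.update(dev for dev in device_ids if heard_test_sound(dev))
        time.sleep(0.1)
    for dev in device_ids:
        send("end", dev)                   # microphones 36 stop picking up sound (S30-S31)
    return targets


# Example: only 3A to 3D (in the living room r1) report hearing the test sound.
in_room = {"3A", "3B", "3C", "3D"}
result = run_estimation(["3A", "3B", "3C", "3D", "3E", "3F"],
                        send=lambda n, d: None,
                        heard_test_sound=lambda d: d in in_room,
                        emit_test_sound=lambda: None,
                        timeout_s=0.2)
print(sorted(result))  # ['3A', '3B', '3C', '3D']
```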

In the information processing method according to the present embodiment, only the acoustic device that has received the estimation signal is targeted for estimation, and thus the user can appropriately and easily specify the acoustic device in the space. As a result, the information processing method according to the present embodiment can more easily specify the arrangement of the acoustic devices.

Next, the position determination process is described with reference to fig. 10. Fig. 10 is a flowchart showing the operations of the information processing apparatus 4 and an acoustic device (here, the 1st acoustic device 3A) in the position determination process of the information processing system 10. The user selects any one of the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D shown in fig. 7 (step S41). More specifically, the user selects the row of the acoustic device to be set. The reception unit 44 receives the input of the acoustic device selected by the user, for example, the 1st acoustic device 3A (step S42). The output unit 43 of the information processing device 4 transmits the detection signal, via the AV receiver 2, to the 1st acoustic device 3A received by the reception unit 44 (step S43). The 1st acoustic device 3A receives the detection signal (step S44) and outputs a response signal (step S45).

Here, the user can determine where the 1st acoustic device 3A is disposed based on the response signal (for example, the buzzer sound). In the information processing system 10 of the present embodiment, the 1st acoustic device 3A is disposed on the left side of the television 5. In other words, the 1st acoustic device 3A is disposed on the front left side of the user. The user operates the application on the screen and selects the arrangement location a1 from, for example, the pull-down list so that the arrangement of the 1st audio device 3A becomes the arrangement location a1 (step S46). The reception unit 44 of the information processing device 4 receives the arrangement of the 1st acoustic device 3A at the arrangement location a1 (step S47).

The position determination processing unit 45 associates the 1st acoustic device 3A with the arrangement location a1 (step S48). The storage unit 41 stores data in which the 1st acoustic device 3A is associated with the arrangement location a1 (step S49).
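For illustration only, the following Python sketch outlines the position determination process of fig. 10, under the assumption that the detection signal transmission and the user's pull-down selection can be abstracted as callbacks; the function and parameter names are hypothetical and are not part of the embodiment.

```python
def determine_position(device_id, send_detection, ask_user_for_location, arrangement_data):
    """Associate one acoustic device with the arrangement location chosen by the user.

    send_detection(device_id)          -- output unit 43 sends the detection signal (S43)
    ask_user_for_location(device_id)   -- the user picks one of a1 to a4 from the pull-down list (S46)
    arrangement_data                   -- dict standing in for the arrangement data in the storage unit 41
    """
    send_detection(device_id)                    # the device outputs a buzzer as its response (S44-S45)
    location = ask_user_for_location(device_id)  # S46-S47
    arrangement_data[device_id] = location       # association and storage (S48-S49)


# Example: the user hears the buzzer from the speaker to the left of the television 5 and picks "a1".
data = {}
determine_position("3A", lambda dev: None, lambda dev: "a1", data)
print(data)  # {'3A': 'a1'}
```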

In the information processing method of the present embodiment, the user can easily specify the acoustic device that has output the buzzer sound, and can specify the respective arrangements using the information processing apparatus 4. That is, in the information processing method of the present embodiment, the positions of the acoustic devices that are targets of estimation among the plurality of acoustic devices 3A to 3F can be easily specified. As a result, the information processing method according to the present embodiment can specify the arrangement of the acoustic devices more easily.

The channel setting process is explained with reference to fig. 11. Fig. 11 is a flowchart showing the operation of the information processing apparatus 4 in the channel setting process of the information processing system 10. As a precondition, a temporary center position (2nd center position) is stored in advance in the storage unit 41.

The reception unit 44 receives the center position selected by the user (step S51). The center position is the position where the television 5 is arranged, as shown in fig. 2. As shown in fig. 2, in the living room r1, the side where the television 5 is disposed is defined as the front, the side of the wall facing it is defined as the rear, and the left and right sides are defined facing the front, with the television 5 at the center. The center position received by the reception unit 44 is stored as the 1st center position in the storage unit 41 (step S52). When the 1st center position is different from the 2nd center position (no in step S53), the channel allocation unit 46 allocates a channel to each of the 1st audio device 3A, the 2nd audio device 3B, the 3rd audio device 3C, and the 4th audio device 3D (step S54). The channel allocation unit 46 then stores the 1st center position as the 2nd center position in the storage unit 41 (step S55).
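For illustration only, the following Python sketch outlines the channel setting process of fig. 11, in which channels are reallocated only when the newly received 1st center position differs from the stored 2nd center position; the function and parameter names are hypothetical and are not part of the embodiment.

```python
def channel_setting(new_center, storage, allocate_channels):
    """Reallocate channels only when the newly received center position (1st center
    position) differs from the stored 2nd center position, then store it (S51-S55).

    storage            -- dict standing in for the storage unit 41
    allocate_channels  -- channel allocation for the estimation-target devices (S54)
    """
    if storage.get("center_position") == new_center:
        return False                         # unchanged: no reallocation is needed
    allocate_channels(new_center)            # S54
    storage["center_position"] = new_center  # the 1st center position becomes the stored 2nd (S55)
    return True


storage = {"center_position": None}  # a temporary 2nd center position is stored in advance
print(channel_setting("television 5", storage, lambda center: None))  # True: channels reallocated
print(channel_setting("television 5", storage, lambda center: None))  # False: center position unchanged
```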

In the information processing method according to the present embodiment, the center position is input, and channels are allocated to the acoustic devices to be estimated, for example, the 1st acoustic device 3A, the 2nd acoustic device 3B, the 3rd acoustic device 3C, and the 4th acoustic device 3D. As a result, in the information processing method of the present embodiment, the information processing apparatus 4 can appropriately and efficiently set the channels of the plurality of acoustic devices.

In addition, the information processing apparatus 4 may capture video or a photograph of the space using its existing camera function and analyze the video data or the photograph to determine the arrangement of the plurality of acoustic devices 3A to 3F.

Further, the response signal may be a sound. This makes it easier for the user to detect the arrangement location. In the information processing system 10 of the present embodiment, when the response signal is a sound, the user can more easily specify the acoustic device.

Description of the reference symbols

10. information processing system

3A, 3B, 3C, 3D, 3E, 3F. acoustic devices

4. information processing apparatus

41. storage unit

42. display unit

43. output unit

44. reception unit

45. position determination processing unit

46. channel allocation unit
