Determining lighting design preferences in augmented and/or virtual reality environments

Document No.: 1866326 | Publication date: 2021-11-19

Abstract: This technology, "Determining lighting design preferences in augmented and/or virtual reality environments", was created by F·皮尔曼 on 2020-03-23. A system is configured to present a plurality of different lighting designs in a virtual and/or augmented reality environment. Each of the plurality of different lighting designs comprises a lighting condition and/or appearance of lighting devices (31-37). The virtual and/or augmented reality environment comprises at least a first spatial region (42) demonstrating a first one of the lighting designs and a second spatial region (43) demonstrating a second one of the lighting designs. The system is further configured to determine a path (71) taken by the user in the virtual and/or augmented reality environment, and to determine, based on the determined path, whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design.

1. A system (1, 21) for presenting a plurality of different lighting designs in a virtual and/or augmented reality environment (41) and determining a user's preferences for the plurality of different lighting designs, each of the plurality of different lighting designs comprising lighting conditions and/or appearances of lighting devices (31-37), the system (1, 21) comprising:

at least one input interface (8, 9);

at least one output interface (4, 9, 28, 29); and

at least one processor (5, 25) configured to:

present, using the at least one output interface (9, 28, 29), a plurality of different lighting designs in the virtual and/or augmented reality environment (41), the virtual and/or augmented reality environment (41) comprising at least a first spatial region (42) demonstrating a first lighting design of the plurality of lighting designs and a second spatial region (43) demonstrating a second lighting design of the plurality of lighting designs,

determine, using the at least one input interface (8, 9), a path (71) taken by the user in the virtual and/or augmented reality environment (41), and

determine, based on the determined path (71), whether the user prefers the first lighting design over the second lighting design or prefers the second lighting design over the first lighting design.

2. The system (1, 21) as claimed in claim 1, wherein the at least one processor (5, 25) is configured to use the at least one output interface (9, 28, 29) to present video pixels corresponding to the plurality of different lighting designs superimposed on a user's view of the real world, and to determine the path (71) based on a plurality of physical locations of the user, the plurality of physical locations being obtained using the at least one input interface.

3. The system (1, 21) according to claim 1, wherein the at least one processor (5) is configured to:

generate a virtual reality environment comprising the first spatial region (42) and the second spatial region (43),

display, using the at least one output interface (9), a first image representing a first view of the virtual reality environment to the user, the first image comprising a first plurality of video pixels,

receive input from the user using the at least one input interface (9),

determine, based on the user input, a second view of the virtual reality environment, the second view being from a different location in the virtual reality environment than the first view,

display, using the at least one output interface (9), a second image representing the second view of the virtual reality environment to the user, the second image comprising a second plurality of video pixels, and

determine the path (71) taken by the user based on the user input.

4. The system (1, 21) as claimed in claim 3, wherein the at least one processor (5) is configured to generate the virtual reality environment in dependence on a specified activity type and/or a specified activity level and/or a specified indoor design and/or a specified daylight characteristic.

5. The system (1, 21) as claimed in claim 4, wherein the at least one processor (5) is configured to determine the specified daylight characteristic based on a specified season of the year and/or time of day and/or weather conditions.

6. The system (1, 21) as claimed in claim 1 or 2, wherein the at least one processor (5, 25) is configured to include the first lighting design or the second lighting design in a lighting plan according to the determined user preference and to output the lighting plan using the at least one output interface (4).

7. The system (1, 21) according to claim 1 or 2, wherein the at least one processor (5, 25) is configured to:

select a third lighting design and a fourth lighting design based on the determined user preference, the third lighting design and the fourth lighting design not yet having been demonstrated in the virtual and/or augmented reality environment (41),

present, using the at least one output interface (9, 28, 29), a further plurality of different lighting designs in a further virtual and/or augmented reality environment, the further virtual and/or augmented reality environment comprising at least a first further spatial region demonstrating the third lighting design and a second further spatial region demonstrating the fourth lighting design,

determine, using the at least one input interface (8, 9), a further path taken by the user in the further virtual and/or augmented reality environment, and

determine, based on the determined further path, whether the user prefers the third lighting design over the fourth lighting design or prefers the fourth lighting design over the third lighting design.

8. The system (1, 21) according to claim 1 or 2, wherein the at least one processor (5, 25) is configured to determine that the user prefers the first lighting design over the second lighting design when it is determined that the user enters the first spatial region (42) and does not enter the second spatial region (43), and to determine that the user prefers the second lighting design over the first lighting design when it is determined that the user enters the second spatial region (43) and does not enter the first spatial region (42).

9. The system (1, 21) as defined in claim 1 or 2, wherein the path indicates how long the user spent in the first and second spatial regions (42, 43), and the at least one processor (5, 25) is configured to determine whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on how long the user spent in the first and second spatial regions (42, 43).

10. The system (1, 21) as claimed in claim 1 or 2, wherein the at least one processor (5, 25) is configured to receive a further input from the user using the at least one input interface (9) and to adjust the first lighting design and/or the second lighting design based on the further input.

11. The system (1, 21) as defined in claim 10, wherein the path indicates whether the user adjusted the first lighting design and/or the second lighting design, and the at least one processor (5, 25) is configured to determine whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on whether the user adjusted the first lighting design and/or the second lighting design.

12. The system (1, 21) as claimed in claim 1 or 2, wherein the first and second lighting designs comprise different light levels, different light colors, different lamp appearances, different luminaire appearances, different lamp types, different luminaire positions and/or different luminaire orientations.

13. A method of presenting a plurality of different lighting designs in a virtual and/or augmented reality environment and determining a user's preferences for the plurality of different lighting designs, each of the plurality of different lighting designs comprising lighting conditions and/or appearances of a lighting device, the method comprising:

presenting (101, 123) a plurality of different lighting designs in the virtual and/or augmented reality environment, the virtual and/or augmented reality environment comprising at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs;

determining (103, 127, 145) a path taken by the user in the virtual and/or augmented reality environment; and

determining (105) whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on the determined path.

14. A computer program or suite of computer programs comprising at least one software code portion or a computer program product storing at least one software code portion, the software code portion being configured for enabling the method of claim 13 to be performed when run on a computer system.

Technical Field

The present invention relates to a system and method for presenting and determining user preferences for a plurality of different lighting designs in a virtual and/or augmented reality environment, each of the plurality of different lighting designs comprising lighting conditions and/or appearance of a lighting device.

The invention also relates to a computer program product enabling a computer system to perform such a method.

Background

Lighting design is an important aspect of creating a good lighting atmosphere. In addition to the lamps and luminaire types, the actual position and orientation of the luminaires in combination with the interior design determines the overall atmosphere.

For larger lighting projects, lighting designers help the customer choose the right design. A standard way of working is for the lighting designer to show schematic plan views, and sometimes illustrations, of the suggested solutions. One problem with this way of working is that the customer does not really experience what it would feel like to have this particular solution.

One direction of interest is the use of Virtual Reality (VR). Virtual reality can greatly improve how an environment is experienced. In a newer way of working, the lighting designer converts the schematic plan view into a VR environment, thereby helping the customer make decisions.

For example, the paper "Integrating Building Information Modeling and Game Engine for Indoor Lighting Visualization" by Worawan Natephra et al., presented at the 16th International Conference on Construction Applications of Virtual Reality held in Hong Kong from 11 to 13 December 2016, discloses a virtual reality system that simulates the daylight and artificial light of a designed building and visualizes the realistic lighting environment using a head-mounted display. The user can control the movement of a user avatar as it walks through the designed space. The system also allows the user to readjust the design parameters if the light output is not satisfactory, for example by customizing the lighting fixture, bulb type, lighting intensity and color temperature. The system also allows the user to adjust the time of day so that dynamic sunlight can be observed.

However, although the system disclosed in this paper is beneficial to lighting designers, it is less suitable for use by customers, since it requires them to make many design decisions that are typically made by lighting designers.

Disclosure of Invention

It is a first object of the present invention to provide a system that a customer of a lighting designer can use, by means of an augmented and/or virtual reality environment, to determine a lighting design that meets his requirements.

It is a second object of the present invention to provide a method that enables a customer of a lighting designer to use an augmented and/or virtual reality environment to determine a lighting design that meets his requirements.

In a first aspect of the invention, a system for presenting a plurality of different lighting designs in a virtual and/or augmented reality environment and determining a user's preferences for the plurality of different lighting designs, each of the plurality of different lighting designs comprising a lighting condition and/or appearance of a lighting device, comprises at least one input interface, at least one output interface, and at least one processor.

The at least one processor is configured to: present, using the at least one output interface, a plurality of different lighting designs in the virtual and/or augmented reality environment, the virtual and/or augmented reality environment including at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs; determine, using the at least one input interface, a path taken by the user in the virtual and/or augmented reality environment; and determine, based on the determined path, whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design.

For example, the first and second lighting designs may comprise different light levels, different light colors, different lamp appearances, different luminaire appearances, different lamp types, different luminaire positions, and/or different luminaire orientations.

By presenting different lighting designs in different spatial regions of an augmented reality and/or virtual reality environment, and determining a preference of a lighting designer's customer for one lighting design over another based on the path the customer takes in the augmented reality and/or virtual reality environment (e.g., which room the user visits), a lighting design that meets the customer's needs can be determined without requiring the customer to make many decisions. An environment in which augmented reality and virtual reality are combined is also referred to as a mixed reality environment.

The at least one processor may be configured to present, using the at least one output interface, video pixels corresponding to the plurality of different lighting designs, superimposed on a user's view of the real world, and to determine the path based on a plurality of physical locations of the user, the plurality of physical locations being obtained using the at least one input interface. An augmented reality environment can be used to demonstrate relatively easily what a lighting design looks like under the current conditions. In an augmented reality environment, the physical location of the user may be used to determine the path of the user.
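As an illustration only (the patent does not prescribe a particular implementation), the following Python sketch shows one way a path could be derived from periodically sampled physical positions and mapped onto spatial regions; the `Region` bounding boxes and `Sample` fields are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Region:
    name: str      # e.g. "room 42" (hypothetical identifier)
    x_min: float   # bounding box on the floor plan, in metres
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max


@dataclass
class Sample:
    t: float  # timestamp in seconds
    x: float  # physical position, e.g. from RF beacons, GPS or camera-based tracking
    y: float


def path_as_region_visits(samples: List[Sample],
                          regions: List[Region]) -> List[Tuple[str, float, float]]:
    """Collapse a stream of timestamped positions into (region, enter_time, exit_time)
    visits; positions outside every region (e.g. the corridor) are skipped."""
    visits: List[Tuple[str, float, float]] = []
    current: Optional[str] = None
    enter_t = 0.0
    for s in samples:
        region = next((r.name for r in regions if r.contains(s.x, s.y)), None)
        if region != current:
            if current is not None:
                visits.append((current, enter_t, s.t))
            current, enter_t = region, s.t
    if current is not None and samples:
        visits.append((current, enter_t, samples[-1].t))
    return visits
```

For example, samples recorded while the user walks from the corridor into one room and back would yield a single visit tuple for that room, including the time of entry and exit.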

The at least one processor may be configured to: generate a virtual reality environment comprising the first spatial region and the second spatial region; display, using the at least one output interface, a first image representing a first view of the virtual reality environment to the user, the first image comprising a first plurality of video pixels; receive input from the user using the at least one input interface; determine, based on the user input, a second view of the virtual reality environment, the second view being from a different location in the virtual reality environment than the first view; display, using the at least one output interface, a second image representing the second view of the virtual reality environment to the user, the second image comprising a second plurality of video pixels; and determine the path taken by the user based on the user input. In a virtual reality environment that is not a mixed reality environment, the user typically provides navigation commands to navigate through the virtual reality environment. Such a virtual reality environment makes it easier to simulate activities, daylight characteristics and indoor designs that are not currently present in the environment for which the lighting is intended, e.g. a building that has not yet been built.

The at least one processor may be configured to generate the virtual reality environment according to a specified activity type and/or a specified activity level and/or a specified indoor design and/or a specified daylight characteristic. This allows the user to simulate an environment similar to the environment for which the lighting is intended, without the environment being available. For example, the at least one processor may be configured to determine the specified daylight characteristic based on a specified season of the year and/or time of day and/or weather conditions.
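Purely as a sketch of how such a mapping might look (the numeric values below are illustrative placeholders, not taken from the patent), the specified season, time of day and weather could be translated into an approximate daylight characteristic:

```python
def daylight_characteristic(season: str, hour: int, weather: str) -> dict:
    """Map a specified season, time of day and weather condition to a rough
    daylight colour temperature (K) and horizontal illuminance (lux).
    Values are illustrative placeholders only."""
    # Baseline over the day: no daylight at night, warm light at low sun, cool at midday.
    if hour < 6 or hour > 20:
        cct, lux = 0, 0
    elif hour < 9 or hour > 17:
        cct, lux = 3500, 5_000
    else:
        cct, lux = 5500, 30_000

    if weather == "overcast" and lux > 0:
        cct, lux = 6500, lux * 0.3   # diffuse, cooler light on cloudy days
    if season == "winter" and lux > 0:
        lux *= 0.6                   # lower sun angle, less daylight

    return {"cct_kelvin": cct, "illuminance_lux": lux}
```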

The at least one processor may be configured to include the first lighting design or the second lighting design in a lighting plan according to the determined user preferences and output the lighting plan using the at least one output interface. The lighting plan typically indicates the location of the lighting devices and other attributes of the lighting design on a map. For example, the lighting plan may be provided to a supplier and/or installer of the lighting device.
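A lighting plan could, for example, be represented as a small serializable structure; the field names below are hypothetical and only meant to show the kind of information (device positions, orientations and light settings on a map) such a plan might carry.

```python
from dataclasses import dataclass, asdict
from typing import List
import json


@dataclass
class LuminairePlacement:
    luminaire_type: str    # hypothetical identifier, e.g. "pendant"
    x: float               # position on the floor plan, in metres
    y: float
    orientation_deg: float
    light_level: float     # dimming level 0..1
    color_temp_k: int


@dataclass
class LightingPlan:
    room: str
    placements: List[LuminairePlacement]

    def to_json(self) -> str:
        """Serialize the plan so it can be handed to a supplier or installer."""
        return json.dumps(asdict(self), indent=2)
```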

The at least one processor may be configured to: select a third lighting design and a fourth lighting design based on the determined user preferences, the third lighting design and the fourth lighting design not yet having been demonstrated in the virtual and/or augmented reality environment; present, using the at least one output interface, a further plurality of different lighting designs in a further virtual and/or augmented reality environment, the further virtual and/or augmented reality environment comprising at least a first further spatial region demonstrating the third lighting design and a second further spatial region demonstrating the fourth lighting design; determine, using the at least one input interface, a further path taken by the user in the further virtual and/or augmented reality environment; and determine, based on the determined further path, whether the user prefers the third lighting design over the fourth lighting design or the fourth lighting design over the third lighting design. This allows the system to refine its knowledge of the user's preferences. The third and fourth lighting designs generally differ from the non-preferred lighting design and may be variations of the preferred lighting design.

The at least one processor may be configured to determine that the user prefers the first lighting design over the second lighting design when it is determined that the user entered the first spatial region and did not enter the second spatial region, and determine that the user prefers the second lighting design over the first lighting design when it is determined that the user entered the second spatial region and did not enter the first spatial region. If the user is able to obtain some sense of lighting in a spatial area, e.g. through glass or through an open door, the decision whether he enters the spatial area is usually a good indication whether he likes the lighting design.

The path may indicate how long the user spent in the first and second spatial regions, and the at least one processor may be configured to determine whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on how long the user spent in the first and second spatial regions. The time a user spends in a spatial area is often a good indication of whether he likes the lighting design, especially when the user walks around in the spatial area.

The at least one processor may be configured to receive further input from the user using the at least one input interface and adjust the first lighting design and/or the second lighting design based on the further input. When a user sees a lighting design that he likes, but thinks it can still improve, it is beneficial to allow him to adjust the lighting design. Preferably, the lighting design determined by the user to be preferred includes these adjustments.

The path may indicate whether the user adjusted the first lighting design and/or the second lighting design, and the at least one processor may be configured to determine whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on whether the user adjusted the first lighting design and/or the second lighting design. If the user adjusts a lighting design, this is usually a good indication that he likes the lighting design.
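The preference signals discussed in the preceding paragraphs (whether a spatial region was entered, how long the user stayed there, and whether the user adjusted the design there) could be combined heuristically, for instance as in the following sketch; the weights are arbitrary illustrations, not part of the claimed method.

```python
from typing import List, Set, Tuple

Visit = Tuple[str, float, float]  # (region name, enter time s, exit time s)


def preferred_design(visits: List[Visit],
                     adjusted_regions: Set[str],
                     first: str = "first region",
                     second: str = "second region") -> str:
    """Return which of two demonstrated designs appears preferred, scoring each
    region on entry, dwell time and whether the user adjusted the design there."""
    def score(region: str) -> float:
        dwell = sum(exit_t - enter_t for name, enter_t, exit_t in visits if name == region)
        entered = 1.0 if dwell > 0 else 0.0
        adjusted = 1.0 if region in adjusted_regions else 0.0
        return 2.0 * entered + 0.01 * dwell + 1.0 * adjusted  # illustrative weights
    return first if score(first) >= score(second) else second
```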

In a second aspect of the invention, a method of presenting a plurality of different lighting designs (each of which comprises a lighting condition and/or appearance of a lighting device) in a virtual and/or augmented reality environment and determining a user's preference for the plurality of different lighting designs comprises: presenting a plurality of different lighting designs in the virtual and/or augmented reality environment, the virtual and/or augmented reality environment including at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs; determining a path taken by the user in the virtual and/or augmented reality environment; and determining whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on the determined path. The method may be performed by software running on a programmable device. The software may be provided as a computer program product.

Furthermore, a computer program for performing the methods described herein, and a non-transitory computer-readable storage medium storing the computer program are provided. For example, the computer program may be downloaded or uploaded to existing devices, or stored at the time of manufacture of the systems.

The non-transitory computer-readable storage medium stores at least one software code portion which, when executed or processed by a computer, is configured to perform executable operations for presenting a plurality of different lighting designs in a virtual and/or augmented reality environment and determining a user's preference for the plurality of different lighting designs, each of the plurality of different lighting designs including a lighting condition and/or appearance of a lighting device.

The executable operations include: presenting a plurality of different lighting designs in the virtual and/or augmented reality environment, the virtual and/or augmented reality environment including at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs; determining a path taken by the user in the virtual and/or augmented reality environment; and determining whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on the determined path.

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as an apparatus, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". The functions described in this disclosure may be implemented as algorithms executed by the processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied in (e.g., stored on) the media.

Any combination of one or more computer-readable media may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein (e.g., in baseband or as part of a carrier wave). Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. The computer readable signal medium may be any such computer readable medium: which is not a computer-readable storage medium and which can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, particularly a microprocessor or Central Processing Unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Drawings

These and other aspects of the invention will be apparent from and elucidated further by way of example with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram of a first embodiment of a system;

FIG. 2 is a block diagram of a second embodiment of a system;

FIG. 3 depicts an example of a map of an augmented and/or virtual reality environment;

FIG. 4 illustrates an example of spatial regions in a physical environment;

FIG. 5 shows an example of a lighting design added to the spatial region of FIG. 4;

FIG. 6 shows an example of a user entering the spatial region of FIG. 5;

FIG. 7 illustrates an example of an augmented and/or virtual reality environment being displayed on a mobile phone;

FIG. 8 depicts an example of a path taken by a user on the map of FIG. 3;

FIG. 9 is a flow chart of a first embodiment of a method;

FIG. 10 is a flow chart of a second embodiment of a method;

FIG. 11 is a flow chart of a third embodiment of a method;

FIG. 12 is a flow chart of a fourth embodiment of a method; and

FIG. 13 is a block diagram of an exemplary data processing system for performing the methods of the present invention.

Corresponding elements in the drawings are denoted by the same reference numerals.

Detailed Description

Fig. 1 illustrates a first embodiment of a system for presenting a plurality of different lighting designs in a virtual and/or augmented reality environment and determining user preferences for the plurality of different lighting designs. Each of the plurality of different lighting designs comprises a lighting condition and/or appearance of the lighting device. In the embodiment of fig. 1, the system is a mobile telephone 1. The mobile phone 1 is connected to a wireless LAN access point 13. The wireless LAN access point 13 is connected to the internet (backbone) 17. The server 19 is also connected to the internet (backbone) 17.

The mobile phone 1 comprises a receiver 3, a transmitter 4, a processor 5, a memory 7, a camera 8 and a display 9. The processor 5 is configured to present a plurality of different lighting designs in the virtual reality environment using the display 9. The virtual reality environment includes at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs.

For example, the first and second lighting designs may comprise different light levels, different light colors, different lamp appearances, different luminaire appearances, different lamp types, different luminaire positions, and/or different luminaire orientations. The first and second lighting designs are obtained from the internet server 19 using the receiver 3 and the transmitter 4 and may be pre-selected by a designer.

The processor 5 is further configured to determine a path taken by the user in the virtual reality environment, and determine, based on the determined path, whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design.

In this way, a lighting designer's customer, or one of its employees (if the customer is a company), may move through a virtual and/or augmented reality environment that presents various lighting designs. Which lighting design the customer prefers, and which may therefore be included in the lighting plan, depends on the path along which the customer moves. Closing a door may be interpreted as a sign of dissatisfaction, while moving into a room may be interpreted as a sign of satisfaction. The indoor design may be matched to the customer's design, and the usage of the space presented in the environment (e.g., people walking/working around) may be matched to the customer's intended usage. The customer may be able to control light settings, such as light levels and colors, and this input may be used to generate further lighting designs.

For example, preference for a lighting design may be measured by a combination of the time the user spends in a particular room (more time indicating preference), how fast the user walks through the room before entering the next room (fast being interpreted as negative), whether the user (virtually) uses the lighting controls (interacting with the system being interpreted as positive), and the settings used on the lighting controls.
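One of these signals, walking speed, could be estimated from the same position samples used to track the path, as in this illustrative sketch (the sample layout is an assumption, not specified by the patent):

```python
import math
from typing import List, Tuple

Sample = Tuple[float, float, float]  # (timestamp s, x m, y m)


def average_speed(samples_in_region: List[Sample]) -> float:
    """Average walking speed (m/s) over the position samples recorded while the
    user was inside one spatial region. A high transit speed can be read as a
    negative signal for the design demonstrated there; a low speed combined with
    a long dwell time as a positive one."""
    if len(samples_in_region) < 2:
        return 0.0
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(samples_in_region, samples_in_region[1:]))
    duration = samples_in_region[-1][0] - samples_in_region[0][0]
    return dist / duration if duration > 0 else 0.0
```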

In the embodiment of fig. 1, the mobile phone 1 provides two modes:

a) virtual reality

b) Mixed reality (i.e., a combination of virtual reality and augmented reality).

To provide mode a), the processor 5 is configured to: generate a virtual reality environment comprising a first spatial region and a second spatial region; display a first image representing a first view of the virtual reality environment to a user using the display 9; receive input from the user using the (touch screen) display 9; and display a second image representing a second view of the virtual reality environment to the user using the display 9 when the user's real or virtual position in the virtual reality environment changes. The second view is from a different location in the virtual reality environment than the first view. The first and second images comprise a first and a second plurality of video pixels, respectively. The processor 5 is configured to determine the path taken by the user based on the user input.

In the embodiment of fig. 1, in mode a) the processor 5 is configured to generate a virtual reality environment according to a specified activity type and/or a specified activity level and/or a specified indoor design and/or specified daylight characteristics. For example, a user may be able to change room properties and activities to experience how the lighting looks for different activities that may occur. In normal life, some rooms may be multi-functional, for example a person may eat in a living room at home, but the person may also hold a party in the room.

The user may also be allowed to change the color of furniture and/or walls in the room. The specified daylight characteristic may be determined based on a specified season of the year and/or time of day and/or weather conditions. Different seasons bring different colors of sunlight. Furthermore, on cloudy days, sunlight is sometimes very diffuse, in which case the user may prefer some colors indoors.

To provide mode b), the processor 5 is configured to present video pixels corresponding to a plurality of different lighting designs (superimposed on the user's view of the real world), and to determine a path based on a plurality of physical locations of the user. The physical location is obtained using input from one or more sensors (e.g., GPS or other location sensors), accelerometers (not shown), and/or the camera 8. For example, the physical location may be determined using RF beacons. In mode b), the images captured by the camera 8 are displayed in real time on the display 9, and depending on the position of the user, one or more lighting designs are superimposed on these images.
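The patent does not fix a positioning technique; as one common possibility, RF beacon signal strengths could be turned into a rough position estimate with a weighted centroid, as sketched below (the beacon identifiers and the dBm-to-weight mapping are assumptions):

```python
from typing import Dict, Tuple


def estimate_position(rssi_dbm: Dict[str, float],
                      beacon_positions: Dict[str, Tuple[float, float]]) -> Tuple[float, float]:
    """Rough position estimate from RF beacon signal strengths using a weighted
    centroid: stronger beacons pull the estimate towards them. Illustrative only;
    a real system might instead use trilateration, fingerprinting or camera-based
    tracking."""
    # Convert dBm to a positive linear weight (stronger signal gives a larger weight).
    weights = {b: 10 ** (rssi / 20.0)
               for b, rssi in rssi_dbm.items() if b in beacon_positions}
    if not weights:
        raise ValueError("no usable beacon measurements")
    total = sum(weights.values())
    x = sum(w * beacon_positions[b][0] for b, w in weights.items()) / total
    y = sum(w * beacon_positions[b][1] for b, w in weights.items()) / total
    return x, y
```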

In the embodiment of fig. 1, the mobile phone 1 provides two modes. In an alternative embodiment, the mobile telephone 1 provides only one of these two modes.

In the embodiment of the mobile phone 1 shown in fig. 1, the mobile phone 1 comprises a processor 5. In an alternative embodiment, the mobile telephone 1 comprises a plurality of processors. The processor 5 of the mobile phone 1 may be a general purpose processor, e.g. from ARM or Qualcomm, or a dedicated processor. The processor 5 of the mobile telephone 1 may run, for example, an Android or iOS operating system. The display 9 may comprise, for example, an LCD or OLED display panel. The display 9 may be a touch screen, for example. For example, the processor 5 may use the touch screen to provide a user interface. The memory 7 may comprise one or more memory units. For example, the memory 7 may comprise solid-state memory.

For example, the receiver 3 and the transmitter 4 may communicate with the wireless LAN access point 13 using one or more wireless communication technologies such as Wi-Fi (IEEE 802.11). In alternative embodiments, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in fig. 1, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver. For example, the camera 8 may comprise a CMOS or CCD sensor. The mobile telephone 1 may include other components typical for mobile telephones, such as a battery and a power supply connector. The invention may be implemented using a computer program running on one or more processors.

In the embodiment of fig. 1, the system is a mobile phone. In an alternative embodiment, the system of the present invention is a different device. In the embodiment of fig. 1, the system of the present invention comprises a single device. In an alternative embodiment, the system of the present invention includes a plurality of devices.

Fig. 2 shows a second embodiment of the system: mixed reality glasses 21. The mixed reality glasses 21 comprise two glass panes 23 and 24, a processor 25, and two projectors 28 and 29. The projector 28 projects an image onto the glass pane 23. The projector 29 projects an image onto the glass pane 24. The mixed reality glasses 21 also comprise a receiver 3, a transmitter 4, a memory 7 and a camera 8 similar to those of the mobile phone of fig. 1.

Processor 25 is configured to present a plurality of different lighting designs in a mixed reality environment (i.e., a combination of virtual and augmented reality environments) using projectors 28 and 29. The mixed reality environment includes at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs. The spatial region may be a real room of a building in which the user is located.

The processor 25 is configured to determine a path taken by the user in the mixed reality environment and determine whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on the determined path. The processor 25 is configured to provide mode b), which has been described in relation to fig. 1. The physical location of the user may be determined using the camera 8 (e.g., by applying object recognition), using the receiver 3 (e.g., based on received RF beacons), and/or by using one or more other sensors (e.g., an accelerometer).

In the embodiment of the mixed reality glasses 21 shown in fig. 2, the mixed reality glasses 21 include a processor 25. In an alternative embodiment, the mixed reality glasses 21 include multiple processors. The processor 25 of the mixed reality glasses 21 may be a general purpose processor or a dedicated processor. The processor 25 of the mixed reality glasses 21 may run, for example, a Unix-based operating system. Memory 27 may include one or more memory units. For example, the memory 27 may comprise a solid state memory. For example, the camera 8 may comprise a CMOS or CCD sensor. For example, projectors 28 and 29 may be (e.g., DLP) pico projectors for near-eye display.

For example, the receiver 3 and the transmitter 4 may communicate with the wireless LAN access point 13 or the mobile communication network using one or more wireless communication technologies. In alternative embodiments, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in fig. 2, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver. The mixed reality glasses 21 may include other components typical for mobile devices, such as a battery. The invention may be implemented using computer programs running on one or more processors.

FIG. 3 depicts an example of a map of an augmented and/or virtual reality environment. The map shows a floor 41 with six rooms: rooms 42-47. Each room has a different lighting design. In the example of fig. 3, these lighting designs are implemented with one or two lighting devices per room. The first lighting design is created by the lighting device 31 in the room 42. The second lighting design is created by the lighting device 32 in the room 43. The third lighting design is created by the lighting device 33 in the room 44.

A fourth lighting design is created by the lighting device 34 in the room 45. A fifth lighting design is created by the lighting device 35 in the room 46. A sixth lighting design is created by the lighting devices 36 and 37 in the room 47. The lighting devices 31 and 33 use the same type of luminaire, but different light settings. The lighting devices 32, 34 and 37 use the same type of luminaire, but different light settings. The lighting devices 35 and 36 use the same type of luminaire, but different light settings.

In the example of fig. 4, it is assumed that fig. 3 depicts an example of a map of an augmented reality environment, i.e., rooms 42-47 are real rooms. Fig. 4 shows a view of a real room 43. The real room 43 includes a real table 53 and a real cabinet 51. The walls between the real room 43 and the corridor are made of translucent glass.

Fig. 5 depicts the lighting design superimposed on a view of the real world (i.e., the view of fig. 4). The lighting design is created by a virtual lighting device 32. Fig. 6 shows a view of the augmented reality environment after the user has entered the real room 43.

Fig. 7 shows a view 63 of the room 43 of fig. 3, shown on the display 9 of the mobile device 1 of fig. 1. The room 43 may be the real room 43 of fig. 4, or may be a virtual room that looks similar to the real room 43 of fig. 4.

After the user has walked through the virtual and/or augmented reality environment of fig. 3, the path taken by the user is determined. Fig. 8 depicts a path 71 taken by a user, which has been overlaid on the map of fig. 3. After the user starts moving through the virtual and/or augmented reality environment at a position close to the entrance of the floor 41, he does not enter room 42 or room 47, because the lighting designs he can see through the glass do not appeal to him.

The user then enters the room 43 and walks around in this room, since the lighting design in this room appeals to him. The user then enters the room 46, because the lighting design in this room initially looks attractive, but the user does not spend much time in this room, because the lighting design does not meet his expectations. The user does not enter the room 44, because the lighting design he can see through the glass does not appeal to him. The user then enters the room 45 and walks around in this room, as the lighting design in this room appeals to him.

The customer's environment may be converted to a computer format, such as a Revit file. The intended use of the space may also be described in this format. The intended use of the space and its geometry are generally sufficient to generate some lighting designs. As the user walks through the environment, his path is monitored. If the user enters a room with high light levels, it may be determined that the user/customer prefers a lighting design with a higher light level.
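This kind of inference could be made explicit by weighting the attributes of each demonstrated design by the time the user spent with it; the sketch below is a heuristic illustration, not the patented method, and the attribute names and values are assumptions.

```python
from typing import Dict


def inferred_attribute_preferences(designs: Dict[str, Dict[str, float]],
                                   dwell_seconds: Dict[str, float]) -> Dict[str, float]:
    """Weight each design attribute (e.g. light level in lux, colour temperature
    in kelvin) by the time the user spent in the room demonstrating it, yielding
    a rough profile of what the user seems to prefer."""
    total = sum(dwell_seconds.get(room, 0.0) for room in designs) or 1.0
    profile: Dict[str, float] = {}
    for room, attributes in designs.items():
        weight = dwell_seconds.get(room, 0.0) / total
        for name, value in attributes.items():
            profile[name] = profile.get(name, 0.0) + weight * value
    return profile


# Example with hypothetical values: a user who lingers in the brighter room ends
# up with a higher preferred light level in the resulting profile.
example = inferred_attribute_preferences(
    {"room 43": {"lux": 500.0, "cct_k": 4000.0},
     "room 46": {"lux": 200.0, "cct_k": 2700.0}},
    {"room 43": 120.0, "room 46": 15.0})
```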

If the user visits rooms with substantially different lighting designs, a variety of different lighting designs, possibly a combination of multiple lighting designs, may be included in the lighting plan or in the next virtual and/or augmented reality environment, since it is apparent that the user/customer is not yet certain what he wants. Opening a glass door through which the user can see the room beyond and moving into that room can be considered an indication of interest. Looking into the room without opening the door can be considered an indication of dissatisfaction.

FIG. 9 illustrates a first embodiment of a method of presenting and determining user preferences for a plurality of different lighting designs in a virtual and/or augmented reality environment. Each of the plurality of different lighting designs comprises a lighting condition and/or appearance of the lighting device.

Step 101 includes presenting a plurality of different lighting designs in a virtual and/or augmented reality environment. The virtual and/or augmented reality environment includes at least a first spatial region demonstrating a first lighting design of the plurality of lighting designs and a second spatial region demonstrating a second lighting design of the plurality of lighting designs. For example, the first and second lighting designs may comprise different light levels, different light colors, different lamp appearances, different luminaire appearances, different lamp types, different luminaire positions, and/or different luminaire orientations.

Step 103 includes determining a path taken by a user in a virtual and/or augmented reality environment. Step 105 comprises determining whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on the determined path.

In the embodiment of fig. 9, step 105 comprises determining that the user prefers the first lighting design over the second lighting design when it is determined that the user enters the first spatial region and does not enter the second spatial region, and determining that the user prefers the second lighting design over the first lighting design when it is determined that the user enters the second spatial region and does not enter the first spatial region.

In the embodiment of fig. 9, the path indicates how long the user spent in the first and second spatial regions, and step 105 further comprises determining whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on how long the user spent in the first and second spatial regions.

In alternative embodiments, step 105 may include only one of considering whether the user entered the spatial region and considering how long the user spent in the spatial region, and/or may include considering alternative preference determination criteria. In the embodiment of fig. 9, step 111 comprises including the first lighting design or the second lighting design in the lighting plan according to the determined user preferences. Step 113 includes outputting the lighting plan.

FIG. 10 illustrates a second embodiment of a method of presenting and determining user preferences for a plurality of different lighting designs in a virtual and/or augmented reality environment.

Step 121 includes determining a current view (e.g., an initial view) of the virtual and/or augmented reality environment based on the user's real or virtual location in the virtual and/or augmented reality environment. The virtual and/or augmented reality environment includes a first spatial region and a second spatial region. For example, these spatial regions may be real rooms or virtual rooms.

Step 123 comprises rendering a (2D or 3D) image corresponding to the current view. The image includes a plurality of pixels. The image may represent a virtual world (e.g., a virtual room) or may be augmented information (e.g., including a real room) superimposed on the user's view of the real world. Depending on the user's location in the virtual and/or augmented reality environment, one or more lighting designs may be visible in the rendered image.

Step 125 includes receiving an input. For example, the input may be a user input or a sensor input. For example, the user may be able to provide user input to indicate a direction in which he wants to walk through the virtual reality environment. The user may also indicate the speed he wants to walk. The sensor input may be used to determine the physical location of the user, for example to determine which real room the user has entered. For example, the sensor input may be received from a GPS system or other position sensor, an accelerometer, and/or a camera.

Step 126 includes determining what type of input is provided. If the user indicates a direction in which he wants to move, or if he moves to another spatial region in the augmented reality environment, step 121 is repeated and a new view is determined in step 121 based on the input. If the user indicates in his user input that he wishes to exit the virtual and/or augmented reality environment and/or if the user moves to a region of space that is not part of the augmented reality environment, step 127 is performed after step 126.

Step 127 includes determining the path taken by the user based on the input received in step 125. Step 105 comprises determining whether the user prefers the first lighting design over the second lighting design or the second lighting design over the first lighting design based on the determined path.

FIG. 11 illustrates a third embodiment of a method of presenting and determining user preferences for a plurality of different lighting designs in a virtual and/or augmented reality environment.

In contrast to the first embodiment of fig. 9, steps 111 and 113 are omitted in the third embodiment of fig. 11, and steps 131, 133, 135 and 137 are performed after step 105 of fig. 9. In an alternative embodiment, steps similar to steps 111 and 113 of FIG. 9 are performed after step 137 of FIG. 11.

Step 131 includes selecting a third lighting design and a fourth lighting design based on the user preferences determined in step 105. The third lighting design and the fourth lighting design have not been demonstrated in a virtual and/or augmented reality environment.

Step 133 includes presenting another plurality of different lighting designs in another virtual and/or augmented reality environment. The further virtual and/or augmented reality environment comprises at least a first further spatial region demonstrating a third lighting design and a second further spatial region demonstrating a fourth lighting design.

Step 135 includes determining additional paths taken by the user in additional virtual and/or augmented reality environments. Step 137 includes determining whether the user prefers the third lighting design over the fourth lighting design or prefers the fourth lighting design over the third lighting design based on the determined additional paths.

The additional virtual and/or augmented reality environment presented in step 133 may be significantly different from the virtual and/or augmented reality environment presented in step 101, or the virtual and/or augmented reality environment presented in step 101 may be replaced with the additional virtual and/or augmented reality environment in step 133 as the user moves (e.g., walks) through the virtual and/or augmented reality environment.

In the embodiment of fig. 11, the third and fourth lighting designs are generated automatically, for example using machine learning and genetic algorithms. When using machine learning and genetic algorithms, an initial set of lighting designs (which may be from an actual lighting designer or other customer) may be used as the basis for machine intelligence. Machine intelligence uses these designs to generate new designs, which are then evaluated by customers. Based on the customer feedback, the machine intelligence can provide new designs and in this way converge to the most preferred lighting system (which may use one or more lighting designs).
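As a sketch of how a genetic algorithm could breed new candidate designs from the ones the user preferred (the parameter names, mutation rate and spread are illustrative assumptions, not values from the patent):

```python
import random
from typing import Dict, List

Design = Dict[str, float]  # e.g. {"light_level": 0.8, "color_temp_k": 3000.0}


def crossover(parent_a: Design, parent_b: Design) -> Design:
    """Take each parameter randomly from one of two preferred parent designs
    (both parents are assumed to share the same parameter names)."""
    return {key: random.choice((parent_a[key], parent_b[key])) for key in parent_a}


def mutate(design: Design, rate: float = 0.2, spread: float = 0.1) -> Design:
    """Randomly perturb some parameters so the next round explores variations."""
    return {key: value * (1.0 + random.uniform(-spread, spread))
            if random.random() < rate else value
            for key, value in design.items()}


def next_generation(preferred: List[Design], size: int = 4) -> List[Design]:
    """Breed a new set of candidate designs from at least one preferred design."""
    children = []
    for _ in range(size):
        if len(preferred) > 1:
            parent_a, parent_b = random.sample(preferred, 2)
        else:
            parent_a = parent_b = preferred[0]
        children.append(mutate(crossover(parent_a, parent_b)))
    return children
```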

FIG. 12 illustrates a fourth embodiment of a method of presenting and determining user preferences for a plurality of different lighting designs in a virtual and/or augmented reality environment.

Compared to the second embodiment of fig. 10, step 126 is replaced by step 141, there is an additional step 143, step 127 is replaced by step 145, and step 105 comprises sub-step 147. In contrast to step 126 of fig. 10, it is further determined in step 141 of fig. 12 whether the user provides an input for adjusting the first lighting design and/or the second lighting design, for example by providing feedback such as likes or dislikes and/or too bright or too dark. If so, step 143 is performed next. Step 143 comprises adjusting the first lighting design and/or the second lighting design based on the further input. Step 121 is repeated after step 143 and a new view is determined in step 121 demonstrating the adjusted lighting design.
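Step 143 could, for instance, translate such coarse feedback into parameter changes along these lines (the step sizes and parameter names are assumptions for illustration only):

```python
def adjust_design(design: dict, feedback: str) -> dict:
    """Apply simple feedback ('too bright', 'too dark', 'too cold', 'too warm')
    to a lighting design; the step sizes are illustrative placeholders."""
    adjusted = dict(design)
    if feedback == "too bright":
        adjusted["light_level"] = max(0.0, adjusted["light_level"] - 0.1)
    elif feedback == "too dark":
        adjusted["light_level"] = min(1.0, adjusted["light_level"] + 0.1)
    elif feedback == "too cold":
        adjusted["color_temp_k"] -= 250.0
    elif feedback == "too warm":
        adjusted["color_temp_k"] += 250.0
    return adjusted
```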

Step 145 includes determining a path taken by the user based on the input received in step 125. In the embodiment of fig. 12, the path indicates whether the user adjusted the first lighting design and/or the second lighting design. Sub-step 147 includes determining whether the user prefers the first lighting design over the second lighting design or prefers the second lighting design over the first lighting design based on whether the user adjusted the first lighting design and/or the second lighting design.

FIG. 13 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to FIGS. 9-12.

As shown in FIG. 13, data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code in storage element 304. Further, processor 302 may execute program code accessed from storage element 304 via system bus 306. In one aspect, a data processing system may be implemented as a computer adapted to store and/or execute program code. It should be appreciated, however, that data processing system 300 may be implemented in the form of any system that includes a processor and memory that is capable of performing the functions described herein.

Storage elements 304 may include one or more physical storage devices, such as, for example, local memory 308 and one or more mass storage devices 310. Local memory can refer to random access memory or other non-persistent storage device(s) typically used during actual execution of program code. The mass storage device may be implemented as a hard disk drive or other persistent data storage device. Processing system 300 can also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from mass storage device 310 during execution. For example, if processing system 300 is part of a cloud computing platform, processing system 300 may also be able to use the storage elements of another processing system.

Input/output (I/O) devices, depicted as input device 312 and output device 314, may optionally be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g., for voice and/or speech recognition), and so forth. Examples of output devices may include, but are not limited to, a monitor or display, speakers, and the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.

In one embodiment, the input and output devices may be implemented as a combined input/output device (shown in FIG. 13 with a dashed line surrounding input device 312 and output device 314). One example of such a combined device is a touch sensitive display, sometimes also referred to as a "touch screen display" or simply a "touch screen". In such embodiments, input to the device may be provided by moving a physical object (such as, for example, a user's stylus or finger) on or near the touch screen display.

Network adapters 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. A network adapter may include a data receiver for receiving data transmitted by the system, device, and/or network to data processing system 300 and a data transmitter for transmitting data from data processing system 300 to the system, device, and/or network. Modems, cable modems and Ethernet cards are examples of different types of network adapters that may be used with data processing system 300.

As shown in fig. 13, the storage element 304 may store an application program 318. In various embodiments, the application programs 318 may be stored in the local memory 308, one or more of the mass storage devices 310, or separate from the local memory and mass storage devices. It is to be appreciated that data processing system 300 also may execute an operating system (not shown in FIG. 13) that may facilitate the execution of application programs 318. Application 318, which may be embodied in executable program code, may be executed by data processing system 300, such as by processor 302. In response to executing the application, data processing system 300 may be configured to perform one or more operations or method steps described herein.

Various embodiments of the invention may be implemented as a program product for use with a computer system, wherein the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) may be embodied on a variety of non-transitory computer readable storage media, where, as used herein, the expression "non-transitory computer readable storage media" includes all computer readable media, with the sole exception being a transitory propagating signal. In another embodiment, the program(s) may be embodied on a variety of transitory computer-readable storage media. Illustrative computer readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may run on the processor 302 described herein.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and some practical applications, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
