Intelligent projection equipment and multi-screen display method

Document No.: 1864968 | Publication date: 2021-11-19

Note: This technology, "Intelligent projection equipment and multi-screen display method", was created by Dong Peng, Wang Guangqiang, and Chang Liang on 2021-04-02. Its main content is as follows: Embodiments of the present application provide an intelligent projection device and a multi-screen display method. The intelligent projection device comprises: projection structures, at least two in number, used to project and form virtual display screens; a camera, used to collect operation feedback information on the virtual display screens and send the operation feedback information to a controller; and a controller in which a display screen identifier corresponding to each virtual display screen is stored, the controller being configured to: receive the operation feedback information; acquire response data of an application program to the operation feedback information; if the response data is one group of interface data containing a display screen identifier, display the interface corresponding to the interface data on the virtual display screen corresponding to that identifier; and if the response data is a plurality of groups of interface data, display the interface corresponding to each group of interface data on a different virtual display screen. The present application solves the technical problem of occlusion in the projected picture.

1. An intelligent projection device, comprising:

at least two projection structures, each used to project and form a virtual display screen, wherein the virtual display screens formed by the plurality of projection structures are located at different positions;

a camera, used to collect operation feedback information on the virtual display screens and send the operation feedback information to the controller;

a controller, in which a display screen identifier corresponding to each virtual display screen is stored, the controller being configured to:

receiving the operation feedback information;

acquiring response data of an application program to the operation feedback information;

if the response data is a group of interface data containing a display screen identifier, displaying an interface corresponding to the interface data on a virtual display screen corresponding to the display screen identifier;

and if the response data is a plurality of groups of interface data, displaying the interface corresponding to each group of interface data on a different virtual display screen.

2. The intelligent projection device of claim 1, wherein the controller is further configured to:

and if the response data is a group of interface data that does not contain any display screen identifier, displaying an interface corresponding to the interface data on a default virtual display screen.

3. The intelligent projection device of claim 1, wherein displaying the interface corresponding to each group of interface data on a different virtual display screen comprises:

extracting the display screen identifier from each group of interface data, to obtain the virtual display screen corresponding to each group of interface data;

and displaying the interface corresponding to each group of interface data on its corresponding virtual display screen.

4. The intelligent projection device of claim 1, wherein the controller is further configured to:

and if the response data is an exit instruction of the application program, exiting the interface of the application program on each virtual display screen corresponding to the application program.

5. The intelligent projection device of claim 1, wherein acquiring the response data of the application program to the operation feedback information further comprises:

and if the control instruction corresponding to the operation feedback information is a starting instruction of an application program, starting the application program and sending the display screen identifier to the application program.

6. The intelligent projection device of claim 1, wherein the controller is further configured to:

and controlling at least two virtual display screens to display different starting interfaces according to a starting instruction input by a user.

7. The intelligent projection device of claim 1, wherein the controller is further configured to:

and controlling a virtual display screen to display a standby interface according to a standby instruction input by a user.

8. A multi-screen display method for an application program, characterized by comprising the following steps:

receiving a starting instruction;

responding to the starting instruction, and detecting whether the current device is provided with a plurality of display screen identifiers;

if the current device is provided with a plurality of display screen identifiers, entering a multi-screen display mode;

if the current device has only one display screen identifier, entering a single-screen display mode;

wherein, in the multi-screen display mode, the application program is configured to generate one group of response data carrying a display screen identifier, or multiple groups of response data, according to an operation instruction of the user; and in the single-screen display mode, the application program is configured to generate response data without any display screen identifier according to an operation instruction input by the user.

9. The multi-screen display method of claim 8, wherein, in the multi-screen display mode, the multiple groups of response data generated by the application program are provided with different display screen identifiers respectively.

10. A multi-screen display method for an intelligent projection device, characterized by comprising the following steps:

receiving operation feedback information input by a user;

acquiring response data of an application program to the operation feedback information;

if the response data is a group of interface data containing a display screen identifier, displaying an interface corresponding to the interface data on a virtual display screen corresponding to the display screen identifier;

and if the response data is a plurality of groups of interface data, displaying the interface corresponding to each group of interface data on a different virtual display screen.

Technical Field

The application relates to the technical field of projection, and in particular to an intelligent projection device and a multi-screen display method.

Background

A projection device is a device that projects an image or video onto an object for display. Compared with display devices that present images or video directly on a screen, projection devices have gradually won the favor of more and more users thanks to advantages such as a large projection interface, flexible installation, and eye protection.

A conventional projection device obtains the content to be projected by connecting to a display device, so its projection interface generally mirrors the display interface of the display device. Because the display area of the display device may be small, when all of the information to be displayed cannot be tiled, some of it is displayed in an overlapping manner and part of the display content is blocked.

Disclosure of Invention

In order to solve the technical problem of poor projection effect, the application provides an intelligent projection device and a multi-screen display method.

In a first aspect, the present application provides an intelligent projection device, comprising:

at least two projection structures, each used to project and form a virtual display screen, wherein the virtual display screens formed by the plurality of projection structures are located at different positions;

a camera, used to collect operation feedback information on the virtual display screens and send the operation feedback information to the controller;

a controller, in which a display screen identifier corresponding to each virtual display screen is stored, the controller being configured to:

receiving the operation feedback information;

acquiring response data of an application program to the operation feedback information;

if the response data is a group of interface data containing a display screen identifier, displaying an interface corresponding to the interface data on a virtual display screen corresponding to the display screen identifier;

and if the response data is a plurality of groups of interface data, displaying the interface corresponding to each group of interface data on a different virtual display screen.

In some embodiments, the controller is further configured to:

and if the response data is a group of interface data that does not contain any display screen identifier, displaying an interface corresponding to the interface data on a default virtual display screen.

In some embodiments, the controller is further configured to:

and if the response data is an exit instruction of the application program, exiting the interface of the application program on each virtual display screen corresponding to the application program.

In a second aspect, the present application provides a multi-screen display method for an application, including:

receiving a starting instruction;

responding to the starting instruction, and detecting whether the current device is provided with a plurality of display screen identifiers;

if the current device is provided with a plurality of display screen identifiers, entering a multi-screen display mode;

if the current device has only one display screen identifier, entering a single-screen display mode;

wherein, in the multi-screen display mode, the application program is configured to generate one group of response data carrying a display screen identifier, or multiple groups of response data, according to an operation instruction of the user; and in the single-screen display mode, the application program is configured to generate response data without any display screen identifier according to an operation instruction input by the user.
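For ease of understanding, the following Python sketch illustrates the mode selection of this second aspect. It is only an illustration under assumed data shapes (plain lists and dictionaries); none of the names come from this disclosure.

```python
# Illustrative sketch of the second aspect: an application chooses its display
# mode from the number of display screen identifiers the device reports.

def select_mode(display_ids):
    """Multi-screen display mode when several identifiers exist,
    single-screen display mode when there is only one."""
    return "multi-screen" if len(display_ids) > 1 else "single-screen"

def build_response(display_ids, interfaces):
    """In multi-screen mode each group of response data carries a distinct
    display screen identifier; in single-screen mode the data carries none."""
    if select_mode(display_ids) == "multi-screen":
        return [{"screen_id": sid, "interface": ui}
                for sid, ui in zip(display_ids, interfaces)]
    return [{"interface": interfaces[0]}]

# A device with two virtual display screens VS1 and VS2:
print(build_response(["VS1", "VS2"], ["chessboard", "video call"]))
# A device with a single screen:
print(build_response(["VS1"], ["chessboard"]))
```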

In a third aspect, the present application provides a multi-screen display method for an intelligent projection device, including:

receiving operation feedback information input by a user;

acquiring response data of an application program to the operation feedback information;

if the response data is a group of interface data containing a display screen identifier, displaying an interface corresponding to the interface data on a virtual display screen corresponding to the display screen identifier;

and if the response data is a plurality of groups of interface data, displaying the interface corresponding to each group of interface data on a different virtual display screen.

The intelligent projection device and the multi-screen display method provided by the application have the following beneficial effects:

In the embodiments of the application, display screen identifiers are stored in the intelligent projection device, so that when an application program needs to display multiple interfaces, it can generate multiple groups of interface data according to the multiple display screen identifiers of the device. The intelligent projection device then displays the interfaces corresponding to these groups of interface data separately on multiple virtual display screens, so the interfaces do not block one another and the display effect is improved.
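To make the dispatch rule of the first aspect concrete, here is a minimal Python sketch of a controller routing response data to virtual display screens. The data shapes and the default-screen choice are assumptions for illustration only, not the patented implementation.

```python
# Illustrative sketch: routing response data by display screen identifier.

DEFAULT_SCREEN = "VS1"  # hypothetical default virtual display screen

def dispatch(response_data, show):
    """response_data: list of dicts, each optionally carrying a 'screen_id'.
    show(screen_id, interface) renders one interface on one virtual screen."""
    if len(response_data) == 1:
        data = response_data[0]
        # One group of interface data: use its identifier if present,
        # otherwise fall back to the default virtual display screen.
        show(data.get("screen_id", DEFAULT_SCREEN), data["interface"])
    else:
        # Several groups: extract each identifier and render each interface
        # on its own virtual display screen.
        for data in response_data:
            show(data["screen_id"], data["interface"])

render = lambda sid, ui: print(f"{sid}: {ui}")
dispatch([{"interface": "default launcher"}], render)
dispatch([{"screen_id": "VS1", "interface": "chessboard"},
          {"screen_id": "VS2", "interface": "video call"}], render)
```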

Drawings

In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that, for those skilled in the art, other drawings can also be obtained from these drawings without any creative effort.

FIG. 1 is a schematic diagram of an intelligent desk lamp according to some embodiments of the present application;

FIG. 2 is a schematic illustration of a position where a virtual display screen is formed in some embodiments of the present application;

FIG. 3 is another schematic diagram of an intelligent desk lamp according to some embodiments of the present application;

FIG. 4 is a schematic illustration of a standby interface in some embodiments of the present application;

FIG. 5A is a schematic view of an educational interface in some embodiments of the present application;

FIG. 5B is a schematic view of an entertainment interface in some embodiments of the present application;

FIG. 5C is a schematic view of a wall launcher interface according to some embodiments of the present application;

FIG. 6 is a schematic illustration of interface switching in some embodiments of the present application;

FIG. 7 is a diagram of a process management page in some embodiments of the present application;

FIG. 8 is a schematic illustration of a display control interface D0 corresponding to a two-screen control D according to some embodiments of the present application;

FIG. 9 is a schematic flow chart illustrating a timing sequence for a user to use a desk lamp according to some embodiments of the present application;

FIG. 10A is a schematic view of a display page of the first virtual display screen VS1 of a user's smart desk lamp in chess game mode according to some embodiments of the present application;

FIG. 10B is a schematic view of a display page of the second virtual display screen VS2 of a user's smart desk lamp in chess game mode according to some embodiments of the present application;

FIG. 11A is a schematic view of a display page of the first virtual display screen VS1 of a user's smart desk lamp at the create-room interface in some embodiments of the present application;

FIG. 11B is a schematic view of a display page of the second virtual display screen VS2 of a user's smart desk lamp at the create-room interface in some embodiments of the present application;

FIG. 12 is a schematic diagram of a display page of the first virtual display screen VS1 of the user's smart desk lamp when inviting friends in some embodiments of the present application;

FIG. 13 is a schematic diagram of a display page of the first virtual display screen VS1 of the (inviting) user's smart desk lamp after a friend accepts the invitation in some embodiments of the present application;

FIG. 14 is a schematic view of a display page of the first virtual display screen VS1 of the (invited) user's smart desk lamp after accepting the invitation in some embodiments of the present application;

FIG. 15 is a schematic diagram of a display page of the second virtual display screen VS2 of the (inviting) user's smart desk lamp when sending a video call request to a friend in some embodiments of the present application;

FIG. 16 is a schematic diagram of a display page of the second virtual display screen VS2 of the (inviting) user's smart desk lamp when sending a video call request to a friend in some embodiments of the present application;

FIG. 17A is a schematic view of a display page of the second virtual display screen VS2 of the user's smart desk lamp during chess playing in some embodiments of the present application;

FIG. 17B is a schematic view of a display page of the first virtual display screen VS1 of the user's smart desk lamp during chess playing in some embodiments of the present application.

Detailed Description

To make the purpose and embodiments of the present application clearer, the exemplary embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only some of the embodiments of the present application, not all of them.

It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.

The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.

The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.

The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.

A desk lamp is a lighting tool that assists people in reading, studying and working. With the progress of technology, common household equipment is becoming increasingly intelligent, and in this wave the functions of the desk lamp have grown richer and richer. In some embodiments, the desk lamp may be provided with a projection structure and connected to a display device to implement the projection function of a projector.

However, conventional projection technology directly projects the content of the display device. If the displayed content is superimposed, the projected picture shows the superimposed content as well. For example, in a video chat scene, the chat window is usually superimposed on the original interface, which may block the content of the original interface and affect the user's viewing experience.

To solve this technical problem, in the embodiments of the application a plurality of projection structures are arranged on the desk lamp, so that the desk lamp can project multiple pictures, and the multiple interfaces of an application program are displayed separately on these projected pictures so that they do not block one another.

Fig. 1 is a schematic structural diagram of an intelligent desk lamp provided in some embodiments of the present application, and as shown in fig. 1, the intelligent desk lamp includes: at least two projection structures, a controller 200, and a camera 300. The controller 200 is connected to the at least two projection structures and the camera 300, respectively, so that the controller 200 can control the operating states of the at least two projection structures and acquire the contents photographed by the camera 300.

In some embodiments, the intelligent desk lamp further comprises a base, a support, and an illumination bulb; the illumination bulb, the projection structures, and the camera can all be disposed on the support, the support can be disposed on the base, and the controller 200 can be disposed inside the base.

In some embodiments, the controller 200 in the intelligent desk lamp is provided with a network communication function, so that the current intelligent desk lamp can communicate with other intelligent desk lamps, an intelligent terminal (e.g., a mobile phone) or a server (e.g., a network platform) to obtain the projection content.

In some embodiments, the controller 200 in the intelligent desk lamp may further be installed with an operating system, so that projection can be performed without connecting to a display device. Of course, an intelligent desk lamp installed with an operating system may also have a network communication function, so that it can communicate with devices such as servers to implement network functions such as upgrading the operating system, installing application programs, and interacting with other intelligent desk lamps.

Referring to fig. 1, the at least two projection structures include at least a first projection structure 110 and a second projection structure 120. The first projection structure 110 is used to project and form a first virtual display screen VS1; the second projection structure 120 is used to project and form a second virtual display screen VS2; and the first virtual display screen VS1 and the second virtual display screen VS2 are formed at different positions.

For example, fig. 2 is a schematic diagram of the forming positions of the virtual display screens in some embodiments of the present application, and as shown in fig. 2, the first virtual display screen VS1 projected by the first projection structure 110 may be formed on a desktop of a desk on which the smart desk lamp is disposed, and the second virtual display screen VS2 projected by the second projection structure 120 may be formed on a wall surface against which the desk rests. It can be understood that, in practical application, the forming position of the virtual display screen can be adjusted according to actual needs.

It can be understood that the specific display content of the first virtual display screen VS1 may be different from that of the second virtual display screen VS2, so that the two virtual display screens cooperate to comprehensively display content of large volume and high complexity.

After the at least two projection structures have respectively projected and formed the at least two virtual display screens, the camera 300 is configured to collect operation feedback information on at least one virtual display screen and send the operation feedback information to the controller 200. The operation feedback information may specifically be, for example, a user's click operations on content displayed on a virtual display screen.

For example, the camera 300 may acquire only operation feedback information on the first virtual display VS1, may acquire only operation feedback information on the second virtual display VS2, or may acquire operation feedback information on the first virtual display VS1 and the second virtual display VS2 simultaneously.

In addition, the number of the cameras 300 can be set to be multiple based on the number of the virtual display screens required to collect the operation feedback information, that is, a single camera collects the operation feedback information of a single virtual display screen.

In some embodiments, the camera 300 may be an infrared camera, so that infrared detection technology can be utilized to ensure the accuracy of the collected operation feedback information in poor-light scenes such as at night or on cloudy days.

In some embodiments, the camera 300 may collect user images in addition to collecting operation feedback information, so as to realize functions of video call, photographing, and the like.

After the at least two projection structures are respectively projected to form the at least two virtual display screens, the controller 200 is configured to control projection contents of the at least two projection structures on the at least two virtual display screens, respectively, and adjust the projection contents of the at least two projection structures based on the operation feedback information on the at least one virtual display screen after receiving the operation feedback information sent by the camera 300.

For example, the controller 200 may adjust only the projection content of the first projection structure 110 on the first virtual display VS1 based on the operation feedback information, may adjust only the projection content of the second projection structure 120 on the second virtual display VS2 based on the operation feedback information, and may adjust both the projection content of the first projection structure 110 on the first virtual display VS1 and the projection content of the second projection structure 120 on the second virtual display VS2 based on the operation feedback information.

It can be understood that two projection structures are only an exemplary illustration of multi-screen projection on the intelligent desk lamp in the present application; the at least two projection structures may also be provided in other numbers, for example 3 or more, and the present application does not specifically limit the number of projection structures of the intelligent desk lamp. For convenience of explanation, the embodiments of the present application each take two projection structures as an example to explain the technical solutions of the present application.

In some embodiments, the number of the controllers 200 may be multiple, and may specifically be the same as the number of the projection structures, so that a single controller may be provided to control the projection content of a single projection structure, and there is a communication connection between the controllers.

For example, for the case that the at least two projection structures include at least the first projection structure 110 and the second projection structure 120, the controller 200 may specifically include a first controller and a second controller, wherein the first controller controls the projection content of the first projection structure 110, the second controller controls the projection content of the second projection structure 120, and the first controller and the second controller are in communication connection.

In some embodiments, the plurality of controllers may be arranged in a centralized manner, that is, disposed at the same designated location in the intelligent desk lamp, or arranged separately, each disposed with its corresponding projection structure; the arrangement positions of the controllers are not limited in this application.

Some embodiments provide an intelligent desk lamp that includes at least two projection structures, i.e., a multi-screen-projection intelligent desk lamp. The virtual display screen formed by each projection structure is located at a different position, so multiple virtual display screens can be formed at different positions and cooperate to comprehensively display content of large volume and high complexity. Meanwhile, operation feedback information on the virtual display screens is obtained through the camera, and the projection content is adjusted according to this information, which can further enhance the interactivity between different users.

Fig. 3 is another schematic structural diagram of an intelligent desk lamp according to some embodiments of the present application. As shown in fig. 3, the first projection structure 110 includes a first light source 112, a first imaging unit 114, and a first lens 116. The first light source 112 is configured to emit light, the first imaging unit 114 is configured to form a pattern based on the light emitted by the first light source 112, and the two cooperate to form a first projection pattern; the first lens 116 magnifies the first projection pattern, so that the first light source 112, the first imaging unit 114 and the first lens 116 cooperate to display the corresponding content on the first virtual display screen VS1 of the first projection structure 110.

In some embodiments, the first light source 112 includes at least one of a three-color light source, a white light source, and a blue-light wheel light source. The three-color light source and the blue-light wheel light source emit light of different colors, so that color content can be displayed on the first virtual display screen VS1. The white light source emits white light to realize the basic desk lamp lighting function.

In some embodiments, first light source 112 may include only a white light source, such that a basic lighting function may be achieved. The first light source 112 may comprise only a three-color light source or only a blue light wheel light source so that color content may be displayed on the first virtual display screen VS1 when projection is desired. The first light source 112 may include a white light source and a three-color light source, or a white light source and a blue light wheel light source, or a white light source, a three-color light source and a blue light wheel light source, so as to realize the basic illumination function and simultaneously display the color content on the first virtual display screen VS 1.

Referring to fig. 3, the second projection structure 120 includes: a second light source 122, a second imaging unit 124, and a second lens 126; the second light source 122 is configured to emit light, the second imaging unit 124 is configured to form a pattern based on the light emitted by the second light source 122, and the second light source 122 and the second imaging unit 124 are configured to cooperate to form a second projection pattern; the second lens 126 is used for magnifying the second projection pattern, so that the second light source 122, the second imaging unit 124 and the second lens 126 cooperate to display corresponding display contents on the second virtual display screen VS2 corresponding to the second projection structure 120.

In some embodiments, the second light source 122 includes at least one of a three-color light source, a white light source, and a blue light wheel light source. The three-color light source and the blue-light wheel light source are used to emit light of different colors, so that color content can be displayed on the second virtual display screen VS 2. The white light source is used for emitting white light so as to realize the basic desk lamp lighting function.

In some embodiments, the second light source 122 may include only a white light source, such that a basic lighting function may be achieved. The second light source 122 may comprise only a three-color light source or only a blue light wheel light source so that color content may be displayed on the second virtual display screen VS2 when projection is desired. The second light source 122 may include a white light source and a three-color light source, or a white light source and a blue light wheel light source, or a white light source, a three-color light source and a blue light wheel light source, so as to realize the basic illumination function and display the color content on the second virtual display screen VS 2.

In some embodiments, the lens in the projection structure is a focus-adjustable lens, and the controller 200 can adjust the size of the projected image by adjusting the focus of the lens.

In some embodiments, the first light source 112 and the second light source 122 may be different light sources respectively providing light beams for different imaging units, or may be the same light source providing light beams for different imaging units through splitting.

In one embodiment, the smart desk lamp may include one or more of the following components: a storage component, a power component, an audio component, and a communication component.

The storage component is configured to store various types of data to support operation at the intelligent desk lamp. Examples of such data include student exercises, examination papers, electronic textbooks, exercise analysis and interpretation, etc. for projection display on the intelligent desk lamp, and types of data specifically include documents, pictures, audio, and video, etc. The memory components may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.

The power supply assembly provides power for various components of the intelligent table lamp. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the intelligent desk lamp.

The audio component is configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the smart desk lamp is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a storage component or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.

The communication component is configured to facilitate wired or wireless communication between the intelligent desk lamp and other devices. The intelligent desk lamp can access a wireless network based on a communication standard, such as WiFi, 4G or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In one embodiment, the principle of the camera 300 collecting the operation feedback information is explained.

The actual imaging surface serves as the virtual display screen, which in some embodiments may be a desktop, a wall, a dedicated projection screen, or another surface that presents a projected image. The user's operation on the virtual display screen is identified from images captured by the camera or from location information transmitted by a connected location-sensing device.

Some exemplary operational acquisition modes are as follows:

(I) Motion track

After the controller 200 controls the projection structure to project on the virtual display screen, the camera 300 captures an image of the finger of the user on the virtual display screen in real time and sends the image to the controller 200. The controller 200 recognizes the user's fingertip in the image by a fingertip tracking technique, and thus, the operation trajectory of the user on the virtual display screen can be obtained based on the movement trajectory of the fingertip.

In some embodiments, in the image acquired by the camera 300, if only a single finger is included, the operation trajectory of the user is determined based on the fingertip of the finger; if a plurality of fingers are included, the operation trajectory of the user is determined based on the fingertip of a specific finger, and the specific finger may be, for example, an index finger or the like, or the trajectories of a plurality of fingertips are determined.
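A rough Python sketch of the trajectory rule just described follows; the per-frame detection format is an assumption, and in practice the detections would come from a fingertip-tracking model applied to camera frames.

```python
# Illustrative sketch: building an operation trajectory from per-frame
# fingertip detections.

def pick_fingertip(detections):
    """detections: dict mapping finger name -> (x, y) fingertip position.
    With a single finger, use its tip; with several, prefer the index finger."""
    if len(detections) == 1:
        return next(iter(detections.values()))
    return detections.get("index")

def trajectory(frames):
    """frames: sequence of per-frame detection dicts -> list of (x, y)."""
    points = []
    for detections in frames:
        tip = pick_fingertip(detections)
        if tip is not None:
            points.append(tip)
    return points

frames = [{"index": (10, 20)}, {"index": (12, 24), "thumb": (40, 6)}]
print(trajectory(frames))  # [(10, 20), (12, 24)]
```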

(II) Click operation

The camera 300 of the intelligent desk lamp is arranged above the user's finger. When the user presses a finger down to click, the fingertip image changes to a certain extent, and the controller 200 can identify whether a click operation was performed according to this change.

For example, with the position of the camera 300 fixed, when the user presses a finger down to click, the distance between the fingertip and the camera 300 changes. In the image acquired by the camera 300, the fingertip pattern before the press is larger than after it, so when the size of the fingertip pattern changes, the user can be considered to have performed a press operation.

For example, when some users click, the fingertip bends downward, which may deform the fingertip pattern in the image or make it incomplete; thus, when the fingertip pattern is deformed or incompletely displayed, the user can be considered to have performed a click operation.

It can be understood that when the fingertip image has just changed, the user can be considered to be in the fingertip-pressed state; after the fingertip image is restored, the user can be considered to be in the fingertip-lifted state. Thus, each full change-and-recovery cycle of the fingertip image can be regarded as one valid click operation.
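The press/lift principle can be sketched as a small state machine over the apparent fingertip size per frame; the shrink ratio used here is an assumed value, not one given in the disclosure.

```python
# Illustrative sketch: one shrink-and-recover cycle of the fingertip pattern
# counts as one valid click.

PRESS_RATIO = 0.8  # assumed: fingertip appears <= 80% of resting size when pressed

def press_events(sizes, resting_size):
    """sizes: apparent fingertip size in each frame. Yields 'press' when the
    size shrinks past the threshold and 'lift' when it recovers."""
    pressed = False
    for size in sizes:
        if not pressed and size < resting_size * PRESS_RATIO:
            pressed = True
            yield "press"
        elif pressed and size >= resting_size * PRESS_RATIO:
            pressed = False
            yield "lift"

print(list(press_events([100, 98, 70, 72, 99, 100], resting_size=100)))
# ['press', 'lift']  -> one valid click
```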

(III) Single click operation

When the controller 200 confirms that the user is in a state of fingertip pressing, the position coordinates of the position Point1 of the state and the time stamp are recorded.

When it is confirmed that the user is in a state where the fingertip is lifted, the position coordinates and the time stamp of the position Point2 in the state are recorded.

If the distance between the position coordinates of position Point1 and the position coordinates of position Point2 is smaller than a preset distance threshold, and the time difference between the timestamp of position Point1 and the timestamp of position Point2 is smaller than a preset time threshold, it is considered that the user has performed a single-click operation at position Point1 (the same as Point2).

(IV) double click operation

When the controller 200 confirms that the user has performed the first valid click operation, the position coordinates and the time stamp of the position Point3 of the click operation are recorded.

When it is confirmed that the user has performed the second valid click operation, the position coordinates and the time stamp of the position Point4 of the click operation are recorded.

If the distance between the position coordinates of position Point3 and the position coordinates of position Point4 is smaller than a preset distance threshold, and the time difference between the timestamp of position Point3 and the timestamp of position Point4 is smaller than a preset time threshold, it is considered that the click operations performed by the user at position points Point3 and Point4 constitute a valid double-click operation.

It is understood that the recognition principle of the multi-click operation is similar to that of the double-click operation, and the description thereof is omitted.
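As a sketch, the double-click rule reduces to a distance check and a time check between two valid clicks; the threshold values below are assumptions.

```python
# Illustrative sketch of the double-click rule for Point3 and Point4.
import math

DIST_THRESHOLD = 20.0  # assumed distance threshold (e.g., pixels)
TIME_THRESHOLD = 0.4   # assumed time threshold (seconds)

def is_double_click(click1, click2):
    """Each click is ((x, y), timestamp); two clicks form a double click
    when they are close in both space and time."""
    (x3, y3), t3 = click1
    (x4, y4), t4 = click2
    return (math.hypot(x4 - x3, y4 - y3) < DIST_THRESHOLD
            and abs(t4 - t3) < TIME_THRESHOLD)

print(is_double_click(((100, 100), 0.00), ((103, 98), 0.25)))  # True
print(is_double_click(((100, 100), 0.00), ((180, 98), 0.25)))  # False
```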

(V) Long-pressing operation

When the controller 200 confirms that the user is in a state of fingertip pressing, the position coordinates of the position Point5 of the state and the time stamp are recorded.

When it is confirmed that the user is in a state where the fingertip is lifted, the position coordinates and the time stamp of the position Point6 in the state are recorded.

If the distance between the position coordinates of position Point5 and the position coordinates of position Point6 is smaller than a preset distance threshold, and the time difference between the timestamp of position Point5 and the timestamp of position Point6 is greater than a preset time threshold, it is considered that the user has performed a long-press operation at position Point5 (the same as Point6).

(VI) sliding operation

When the controller 200 confirms that the user is in a state of fingertip pressing, the position coordinates of the position Point7 of the state and the time stamp are recorded.

When it is confirmed that the user is in a state where the fingertip is lifted, the position coordinates and the time stamp of the position Point8 in the state are recorded.

If the distance between the position coordinates of position Point7 and the position coordinates of position Point8 is greater than a preset distance threshold, and the time difference between the timestamp of position Point7 and the timestamp of position Point8 is greater than a preset time threshold, it is considered that the user has performed a sliding operation from position Point7 to position Point8.
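Rules (III), (V) and (VI) share the same inputs (press point/time, lift point/time) and can be sketched as one classifier; the thresholds are assumptions, and combinations the rules above do not define return None.

```python
# Illustrative sketch: classifying a press->lift pair per rules (III), (V), (VI).
import math

DIST_THRESHOLD = 20.0  # assumed distance threshold
TIME_THRESHOLD = 0.5   # assumed time threshold (seconds)

def classify(press_point, press_time, lift_point, lift_time):
    dist = math.hypot(lift_point[0] - press_point[0],
                      lift_point[1] - press_point[1])
    duration = lift_time - press_time
    if dist < DIST_THRESHOLD and duration < TIME_THRESHOLD:
        return "single click"   # rule (III)
    if dist < DIST_THRESHOLD and duration > TIME_THRESHOLD:
        return "long press"     # rule (V)
    if dist > DIST_THRESHOLD and duration > TIME_THRESHOLD:
        return "slide"          # rule (VI)
    return None                 # not defined by the rules above

print(classify((10, 10), 0.0, (12, 11), 0.1))  # single click
print(classify((10, 10), 0.0, (11, 10), 1.2))  # long press
print(classify((10, 10), 0.0, (80, 10), 0.9))  # slide
```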

It is understood that the sliding operation may be a lateral sliding, such as a leftward sliding or a rightward sliding, a longitudinal sliding, such as an upward sliding or a downward sliding, or an oblique sliding, such as an upward leftward sliding or a downward rightward sliding, etc.

In some embodiments, the sliding distance and the sliding direction may be determined based on the position coordinates of position points Point7 and Point8 (in the default coordinate system, the positive X-axis points right and the positive Y-axis points up).

For example, the sliding distance may be calculated by the following formula:

dis = √((x8 − x7)² + (y8 − y7)²)

where dis is the sliding distance, (x7, y7) are the position coordinates of position Point7, and (x8, y8) are the position coordinates of position Point8.

When x7 is equal to x8, or the difference between x7 and x8 is smaller than a preset threshold: if y7 > y8, the sliding direction is downward; if y7 < y8, the sliding direction is upward.

When y7 is equal to y8, or the difference between y7 and y8 is smaller than a preset threshold: if x7 > x8, the sliding direction is to the left; if x7 < x8, the sliding direction is to the right.

When x7 > x8: if y7 > y8, the sliding direction is down-left; if y7 < y8, the sliding direction is up-left.

When x7 < x8: if y7 > y8, the sliding direction is down-right; if y7 < y8, the sliding direction is up-right.
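The distance formula and the direction table above can be sketched together as follows; the tolerance used to treat coordinates as "equal" is an assumed value.

```python
# Illustrative sketch: sliding distance and direction from Point7 to Point8
# (positive X to the right, positive Y upward).
import math

AXIS_TOLERANCE = 5.0  # assumed tolerance for "x7 equal to x8" / "y7 equal to y8"

def slide(p7, p8):
    x7, y7 = p7
    x8, y8 = p8
    dis = math.hypot(x8 - x7, y8 - y7)  # sqrt((x8-x7)^2 + (y8-y7)^2)
    if abs(x7 - x8) < AXIS_TOLERANCE:        # vertical slide
        direction = "down" if y7 > y8 else "up"
    elif abs(y7 - y8) < AXIS_TOLERANCE:      # horizontal slide
        direction = "left" if x7 > x8 else "right"
    elif x7 > x8:                            # leftward diagonal
        direction = "down-left" if y7 > y8 else "up-left"
    else:                                    # rightward diagonal
        direction = "down-right" if y7 > y8 else "up-right"
    return dis, direction

print(slide((100, 100), (100, 40)))   # (60.0, 'down')
print(slide((100, 100), (160, 160)))  # (84.85..., 'up-right')
```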

In one embodiment, the user's operation on the virtual display screen may also be simulated by other peripheral devices, such as an induction pen.

In some embodiments, the nib of the induction pen is provided with a position sensor, which transmits the nib position to the controller 200 of the intelligent desk lamp in real time, so that the intelligent desk lamp obtains the user's operation trajectory on the virtual display screen from the position changes received by the controller 200.

In addition, the nib of the induction pen is provided with a press-sensing structure (for example, a pressure sensor). When the user needs to perform a click operation, the user touches the desktop with the induction pen, so that the press-sensing structure acquires a press signal and sends it to the controller 200 of the intelligent desk lamp; the controller 200 can then determine the position where the user clicked based on the current position of the nib and the press signal.

It is understood that the principle of other operations (such as double-click, long-press, etc.) performed by the user through the sensing pen is the same as that performed through the fingertip, and the detailed description thereof is omitted here.
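A minimal sketch of the induction-pen path follows; the event format is an assumption, standing in for the position-sensor and pressure-sensor streams described above.

```python
# Illustrative sketch: resolving pen press signals against the latest nib
# position to obtain click locations.

def pen_clicks(events):
    """events: sequence of ('move', (x, y)) and ('press',) tuples. A press
    signal is paired with the most recent nib position."""
    last_position = None
    clicks = []
    for event in events:
        if event[0] == "move":
            last_position = event[1]
        elif event[0] == "press" and last_position is not None:
            clicks.append(last_position)
    return clicks

stream = [("move", (5, 5)), ("move", (8, 9)), ("press",), ("move", (30, 2))]
print(pen_clicks(stream))  # [(8, 9)] -> the user clicked at (8, 9)
```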

For convenience of understanding, in the following embodiments the intelligent desk lamp includes a single controller 200, two projection structures (the first projection structure 110 and the second projection structure 120), and a single camera 300; the camera 300 collects operation feedback information only on the first virtual display screen VS1; the first virtual display screen VS1 projected by the first projection structure 110 is formed on the desktop of the desk on which the intelligent desk lamp is disposed; and the second virtual display screen VS2 projected by the second projection structure 120 is formed on the wall against which the desk rests.

In one embodiment, the control of the operating state of the intelligent desk lamp is explained. In some embodiments, the working state of the intelligent desk lamp includes power-on, power-off and standby.

The lamp body of the intelligent desk lamp is provided with a switch key. The switch key may be a physical press structure or a touch structure. When it is a physical press structure, the key can be considered active if the user presses it; when it is a touch structure, the key can be considered active if a body part of the user (e.g., a finger) rests on its surface.

(1) Power-on

When the intelligent desk lamp is currently in the power-off state, if the switch key is detected to be in the active state, the intelligent desk lamp is turned on, i.e., adjusted from the power-off state to the working state.

In some embodiments, the switch key being in the active state refers to the state in which the key is pressed.

In some embodiments, when the intelligent desk lamp is currently in the power-off state and the switch key is detected to be in the active state, the lamp is not turned on immediately. Instead, the duration for which the switch key remains active is obtained: if it reaches a first preset duration T1 (for example, 3 seconds), the intelligent desk lamp is turned on; otherwise it is not. This avoids turning on the intelligent desk lamp through an accidental touch by the user.

When the intelligent desk lamp is turned on, the first virtual display screen VS1 and the second virtual display screen VS2 both display preset boot animations, and the boot animations displayed by the first virtual display screen VS1 and the second virtual display screen VS2 may be the same or different. After the time that the first virtual display screen VS1 and the second virtual display screen VS2 show the boot animation reaches a second preset time period T2 (e.g., 5-10 seconds), the first virtual display screen VS1 and the second virtual display screen VS2 enter the corresponding working interfaces.

(2) Shutdown

When the intelligent desk lamp is currently in the power-on state, if the switch key is detected to be in the active state, the intelligent desk lamp is turned off, i.e., adjusted from the working state to the power-off state.

In some embodiments, when the intelligent desk lamp is currently in the power-on state and the switch key is detected to be in the active state, the lamp is not turned off immediately. Instead, the duration for which the switch key remains active is obtained: if it reaches a third preset duration T3 (for example, 3 seconds; T3 may be the same as the first preset duration T1), the intelligent desk lamp is turned off; otherwise it is not. This avoids turning off the intelligent desk lamp through an accidental touch by the user.

When the intelligent desk lamp is turned off, the first virtual display screen VS1 and the second virtual display screen VS2 both display preset shutdown animations, and the shutdown animations displayed by the first virtual display screen VS1 and the second virtual display screen VS2 may be the same or different. The time for displaying the shutdown animation of the first virtual display screen VS1 and the second virtual display screen VS2 is specifically a fourth preset time length T4 (e.g., 3 seconds or less than 3 seconds).

(3) Standby

When the intelligent desk lamp is currently in the power-on state but no user operation is detected for a fifth preset duration T5 (for example, 60 seconds), the intelligent desk lamp enters the standby state. Specifically, "no user operation" here means both that the user performs no operation on the intelligent desk lamp and that the lamp maintains a static state (for example, the projection content is unchanged), so that the lamp does not show the standby interface in special situations such as video playback, where the user performs no operation but the content keeps changing.

In some embodiments, when power-on and power-off are triggered by the switch key being held active for the corresponding preset duration, the intelligent desk lamp may also be switched to the standby state when the switch key is detected to be only briefly active, i.e., active for less than the first preset duration T1 and the third preset duration T3, for example when the user presses the switch key for a short time.

After the intelligent desk lamp enters a standby state, the first virtual display screen VS1 turns off, and the second virtual display screen VS2 displays a preset standby interface. Fig. 4 is a schematic diagram of a standby interface, and as shown in fig. 4, the standby interface may be a time interface.

After the intelligent desk lamp enters the standby state, if the camera 300 collects operation feedback information on the first virtual display screen VS1, the intelligent desk lamp exits the standby state and enters the working state.

In some embodiments, when the intelligent desk lamp receives a preset key operation or a preset signal input, the intelligent desk lamp exits the standby state and enters the working state.
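The power-on/power-off/standby behaviour above can be summarized as a small state machine; the durations T1, T3 and T5 below use the example values from the text, and the function names are hypothetical.

```python
# Illustrative sketch of the working-state transitions described above.

T1 = 3.0   # seconds the switch key must stay active to power on
T3 = 3.0   # seconds the switch key must stay active to power off
T5 = 60.0  # seconds of inactivity before standby

def on_switch_key(state, active_seconds):
    """Long holds toggle power; a short press while powered on enters
    standby, and accidental touches while off are ignored."""
    if state == "off":
        return "on" if active_seconds >= T1 else "off"
    if state == "on":
        return "off" if active_seconds >= T3 else "standby"
    return state

def on_idle(state, idle_seconds, content_static):
    """Standby requires both no user operation for T5 and static projection
    content, so video playback does not trigger standby."""
    if state == "on" and idle_seconds >= T5 and content_static:
        return "standby"
    return state

print(on_switch_key("off", 3.5))                  # 'on'
print(on_switch_key("on", 0.5))                   # 'standby'
print(on_idle("on", 61.0, content_static=True))   # 'standby'
print(on_idle("on", 61.0, content_static=False))  # 'on' (e.g., video playing)
```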

In one embodiment, the working interfaces of the first virtual display VS1 and the second virtual display VS2 are explained.

After the intelligent desk lamp enters the power-on state, the first virtual display screen VS1 enters a Launcher interface, and the second virtual display screen VS2 enters a wall launcher interface. The Launcher interface, i.e., a UI (User Interface), is the medium through which the user interacts with the smart desk lamp. The Launcher interface comprises at least a first interface and a second interface, where the first interface is specifically an education interface and the second interface is specifically an entertainment interface. It is understood that the Launcher interface may also include other types of interfaces.

Fig. 5A is a schematic diagram of the education interface. As shown in fig. 5A, the education interface includes an open course module, an online teaching system module, a teaching channel module, a practice exercise module, a simulation examination module, a homework correction module, a celebrity live broadcast module, a setting module, a desk lamp control module, and the like. In addition, the education interface may include other modules related to learning, which are not enumerated here.

Fig. 5B is a schematic diagram of the entertainment interface. As shown in fig. 5B, the entertainment interface includes a music module, a video module, a game module, other application management modules, and the like; the game module offers, for example, a desktop piano, chess games, and the like. In addition, the entertainment interface may also include other modules related to healthy entertainment applications, which are not enumerated here.

It should be noted that, in some embodiments, for the first virtual display screen VS1, the lower side of the Launcher interface includes 4 control keys: a return key, a home key, a process key, and a minimize key. The return key is used to return to the previous page, the home key is used to return directly to the corresponding Launcher interface, the process key is used to display all current processes for process management, and the minimize key is used to minimize the application running in the current foreground.

In some embodiments, the user can switch the Launcher interface by sliding the screen left/right. For example, when the current Launcher interface is an educational interface, the user may slide right to switch to an entertainment interface; when the current Launcher interface is an entertainment interface, the user can slide left to switch to an educational interface.

Fig. 5C is a schematic diagram of a wall launcher interface, which may display weather information and time information, as shown in fig. 5C. In addition, the wall launcher interface may also display other information, such as user-defined information, and the like, to name but a few.

In some embodiments, after the smart desk lamp enters the power-on state from the standby state, the second virtual display screen VS2 may further continue to display a preset standby interface, for example, a time interface as shown in fig. 4, without displaying content.

In some embodiments, a password for entering the entertainment interface may be preset; that is, when switching from the education interface to the entertainment interface, the corresponding password must be entered for the switch to succeed. Otherwise the user cannot use the entertainment functions of the entertainment interface, which helps parents better control its use.

FIG. 6 is a schematic diagram of interface switching, as shown in FIG. 6, a user needs to input a password to enter an entertainment interface.

In some embodiments, a sliding operation of the user is obtained, and the Launcher interface to be displayed is determined from the direction of the slide and the current Launcher interface. If the interface to be displayed is configured with a password, a floating layer is set over it while it is displayed, and a password input control is shown on the floating layer; at this time the interface to be displayed cannot obtain focus. If the password entered in the password input control is correct, the floating layer is cancelled and the interface to be displayed is allowed to obtain focus; if the password is incorrect, the password input control remains displayed on the floating layer.

In some embodiments, while the Launcher interface to be displayed and the floating layer are shown, if a sliding operation of the user is received, the display switches to the next interface according to the direction of the slide.
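As an illustration of the floating-layer password flow, here is a hedged Python sketch; the password table and interface names are assumptions for the example only.

```python
# Illustrative sketch: switching Launcher interfaces with a password-protected
# floating layer, as described above.

PASSWORDS = {"entertainment": "1234"}  # hypothetical configuration

class Launcher:
    def __init__(self):
        self.current = "education"  # interface that currently has focus
        self.pending = None         # interface shown behind the floating layer

    def on_slide(self, direction):
        # Education <-> entertainment switching by left/right slides.
        target = "entertainment" if direction == "right" else "education"
        if target in PASSWORDS:
            self.pending = target   # shown, but cannot obtain focus yet
        else:
            self.current, self.pending = target, None

    def on_password(self, text):
        if self.pending and text == PASSWORDS[self.pending]:
            # Correct password: cancel the floating layer, grant focus.
            self.current, self.pending = self.pending, None

launcher = Launcher()
launcher.on_slide("right")
print(launcher.current, launcher.pending)  # education entertainment
launcher.on_password("1234")
print(launcher.current, launcher.pending)  # entertainment None
```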

In one embodiment, the process control management of the first virtual display VS1 and the second virtual display VS2 is explained.

After the user clicks the process key on the first virtual display screen VS1, the camera 300 collects this operation feedback information and sends it to the controller 200. The controller 200 then controls the first projection structure 110 to display a process management page on the first virtual display screen VS1, through which the user can manage the currently running processes, for example closing them or switching their display screens.

Fig. 7 is a schematic diagram of a process management page, and as shown in fig. 7, a plurality of currently running processes are displayed in a stacked manner, and a user may select a process to be managed by scrolling the processes up and down. For example, the currently managed process is process one, and by scrolling down, the currently managed process can be switched to process two.

In some embodiments, after the currently managed process is switched, if the user clicks and selects the currently managed process, the content of the currently managed process is directly displayed on the corresponding virtual display screen.

In addition, a label A is arranged in the area corresponding to each process, and the label A identifies the virtual display screen on which the process is currently displayed. After an input instruction for process management is received, the running processes and the positions at which they are presented are acquired, and the virtual display screen on which each process was presented before the instruction was received (the first virtual display screen VS1 or the second virtual display screen VS2) is shown in the label sub-control of the corresponding process control; the content displayed by the label sub-control therefore differs from process to process. For example, label A1 corresponding to process one in fig. 7 is "one screen", meaning that process one is displayed on the first virtual display screen VS1; label A2 corresponding to process two is "two screens", meaning that process two is displayed on the second virtual display screen VS2; label A3 corresponding to process three is "one screen + two screens", meaning that process three is displayed on both the first virtual display screen VS1 and the second virtual display screen VS2. At this point, if the user clicks to select process one, the content corresponding to process one is directly displayed on the first virtual display screen VS1; if the user clicks to select process two, the content corresponding to process two is directly displayed on the second virtual display screen VS2; and if the user clicks to select process three, the content corresponding to process three is directly displayed on the first virtual display screen VS1 and the second virtual display screen VS2.
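As an illustration of how the label sub-control content could be derived from the screens each process occupied before the management instruction was received, consider the following minimal Python sketch; the data structures and names are assumptions made for the example, not identifiers from the actual implementation.

SCREEN_NAMES = {"VS1": "one screen", "VS2": "two screens"}

def label_for(screens):
    # Join the per-screen names with '+' for processes shown on both
    # screens, matching labels A1-A3 in fig. 7 ("one screen",
    # "two screens", "one screen + two screens").
    return " + ".join(SCREEN_NAMES[s] for s in screens)

running = {
    "process one":   ["VS1"],
    "process two":   ["VS2"],
    "process three": ["VS1", "VS2"],
}

for proc, screens in running.items():
    print(proc, "->", label_for(screens))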

It is to be understood that the currently running processes may also be displayed in other forms, for example as multiple small windows; the display form of the processes is not limited in this application.

In addition, the expression form of the label content is not limited in this application. The label content can be any combination of numbers, letters, and characters from which the user can clearly and intuitively tell which virtual display screen a process is displayed on. For example, the label content "one screen" in fig. 7 may also be "1", "two screens" may also be "2", and so on.

In some embodiments, referring to fig. 7, after the user opens the process management page, in addition to seeing which virtual display screen each process is displayed on, the user can also perform shutdown management on the currently running processes. In some embodiments, the user may close a running process through the process close control B.

For example, the user may close process one by clicking the process close control B1 corresponding to process one, close process two by clicking the process close control B2 corresponding to process two, and close process three by clicking the process close control B3 corresponding to process three. In this way, the user can close exactly the processes that need to be closed.

In some embodiments, referring to fig. 7, the process management page is further provided with a one-touch closing control B0. When many processes need to be closed, the user can close all currently running processes by clicking the one-touch closing control B0 instead of clicking the process close controls one by one, which improves the efficiency of closing processes.

After completing the process closing operation, the user can return to the page displayed before process management by clicking the return key. In some embodiments, after completing the process closing operation, if the user clicks a blank area in the process management interface, the educational interface of the Launcher is displayed again.

In some embodiments, referring to fig. 7, after opening the process management page, in addition to performing close management on the currently running processes, the user may also switch the virtual display screen corresponding to the currently managed process. In some embodiments, the user may switch the display screen of a process through the screen switching control C.

In some embodiments, when the currently managed process is process one, the label A1 corresponding to process one is "one screen", that is, process one was displayed on the first virtual display screen VS1 before the process management instruction was received. The user may then perform a left-sliding operation on the screen switching control C1 to switch process one to the second virtual display screen VS2 for display; the first virtual display screen VS1 then no longer displays the content of process one, that is, process one is switched to a different screen for display.

In some embodiments, when the currently managed process is process one, the user may perform a right-sliding operation on the screen switching control C2, in which case process one remains displayed on the first screen.

In some embodiments, when the currently managed process is process two, the label A2 corresponding to process two is "two screens", that is, process two was displayed on the second virtual display screen VS2 before the process management instruction was received. The user may then perform a right-sliding operation on the screen switching control C2 to switch process two to the first virtual display screen VS1 for display; the second virtual display screen VS2 then no longer displays the content of process two, that is, process two is switched to a different screen for display.

In some embodiments, when the currently managed process is process two, the user may perform a left-sliding operation on the screen switching control C1, in which case process two remains displayed on the second screen.

In some embodiments, when the currently managed process is process three, the label A3 corresponding to process three is "one screen + two screens", that is, process three was displayed on both the first virtual display screen VS1 and the second virtual display screen VS2 before the process management instruction was received. The user may then perform a left-sliding operation on the screen switching control C1 to switch process three to the second virtual display screen VS2 for display; the first virtual display screen VS1 then no longer displays the content of process three, that is, process three is switched from dual-screen display to single-screen display (on the second virtual display screen VS2). At this time, the first virtual display screen VS1 may display a Launcher interface, for example the educational interface.

In some embodiments, the user may instead perform a right-sliding operation on the screen switching control C2 to switch process three to the first virtual display screen VS1 for display; the second virtual display screen VS2 then no longer displays the content of process three, that is, process three is switched from dual-screen display to single-screen display (on the first virtual display screen VS1). At this time, the second virtual display screen VS2 may display a Launcher interface or the time interface.
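The screen-switching behavior described in the preceding paragraphs can be captured in a few lines. The Python sketch below is a simplified model under the assumption that a left slide always targets VS2 and a right slide always targets VS1; the function name and data shapes are illustrative, not from the actual implementation.

def switch_screens(current_screens, direction):
    """Return the screens a process occupies after a swipe on its switching control."""
    target = "VS2" if direction == "left" else "VS1"
    # Whether the process came from one screen or from both, after the
    # switch it is displayed on the target screen alone; the vacated screen
    # falls back to a Launcher or time interface (not modeled here).
    return [target]

print(switch_screens(["VS1"], "left"))          # process one   -> ['VS2']
print(switch_screens(["VS2"], "right"))         # process two   -> ['VS1']
print(switch_screens(["VS1", "VS2"], "left"))   # process three -> ['VS2']
print(switch_screens(["VS1"], "right"))         # stays on ['VS1']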

In some embodiments, the screen switching controls C1 and C2 are conditionally hidden or displayed, that is, they are not always in a displayed state.

In some embodiments, when the currently managed process is currently displayed on the first virtual display screen VS1 and can be switched to the second virtual display screen VS2 for display (for example, process one in fig. 7), the screen switching control C1 is displayed and the screen switching control C2 is hidden.

When the currently managed process is currently displayed on the second virtual display screen VS2 and can be switched to the first virtual display screen VS1 for display (for example, process two in fig. 7), the screen switching control C2 is displayed and the screen switching control C1 is hidden.

When the currently managed process is currently displayed on both the first virtual display screen VS1 and the second virtual display screen VS2 and can be switched to either screen for separate display (for example, process three in fig. 7), the screen switching controls C1 and C2 are displayed simultaneously.

When the currently managed process is currently displayed on both the first virtual display screen VS1 and the second virtual display screen VS2 but cannot be switched to either screen for separate display (for example, process three in fig. 7), the screen switching controls C1 and C2 are both hidden.

Therefore, by conditionally hiding or displaying the screen switching controls C1 and C2, interference with the user during display switching can be avoided.
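A minimal Python sketch of these conditional visibility rules follows; the names are illustrative, and the can_switch_alone flag is an assumed stand-in for whatever condition makes a dual-screen process unswitchable.

def visible_controls(screens, can_switch_alone=True):
    """Return which of C1 (switch to VS2) / C2 (switch to VS1) to display."""
    on_vs1, on_vs2 = "VS1" in screens, "VS2" in screens
    if on_vs1 and on_vs2:
        # Dual-screen process: both controls, but only if the process may
        # be switched to a single screen at all.
        return {"C1", "C2"} if can_switch_alone else set()
    if on_vs1:
        return {"C1"}   # can move to VS2; C2 hidden
    if on_vs2:
        return {"C2"}   # can move to VS1; C1 hidden
    return set()

print(visible_controls(["VS1"]))                # {'C1'}
print(visible_controls(["VS2"]))                # {'C2'}
print(visible_controls(["VS1", "VS2"]))         # {'C1', 'C2'} (set order may vary)
print(visible_controls(["VS1", "VS2"], False))  # set()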

In some embodiments, the user may also configure the screen switching controls C1 and C2 in the settings to be always displayed or always hidden.

In one embodiment, display control of the second virtual display screen VS2 is explained.

Referring to fig. 5A, a two-screen control D is disposed on the display page of the first virtual display screen VS1. When the user clicks the two-screen control D, the first virtual display screen VS1 displays a display control interface for the second virtual display screen VS2 over the original interface.

Fig. 8 is a schematic diagram of the display control interface D0 corresponding to the two-screen control D. As shown in fig. 8, the display control interface D0 includes a display area D1, a return-to-one-screen control D2, a close-two-screen control D3, a touch area D4, and an exit control D5.

The display area D1 is used to display the currently running process of the second virtual display screen VS2.

The return-to-one-screen control D2 is used to switch the content displayed on the second virtual display screen VS2 to the first virtual display screen VS1 for display. For example, if the content of a process is currently displayed on the second virtual display screen VS2 and the user clicks the return-to-one-screen control D2, the process switches to the first virtual display screen VS1 for display; the second virtual display screen VS2 may then display a Launcher interface or the time interface.

The close-two-screen control D3 is used to switch the content displayed on the second virtual display screen VS2 to the first virtual display screen VS1 for display while turning off the second virtual display screen VS2. For example, if the content of a process is currently displayed on the second virtual display screen VS2 and the user clicks the close-two-screen control D3, the process switches to the first virtual display screen VS1 for display, and the second virtual display screen VS2 enters the off-screen state, that is, it no longer displays content.

The touch area D4 is used to control operations on the second virtual display screen VS2, in a manner similar to a notebook touchpad. Based on the user operation in the touch area D4 captured by the camera and a preset mapping relation between positions in the touch area D4 and positions on the second virtual display screen VS2, the user operation is mapped to an operation at the corresponding position on VS2, and the control that should execute the operation is then determined from the positions of the controls on VS2. For example, the user can manipulate a screen pointer on the second virtual display screen VS2 through the touch area D4 to perform the corresponding operations.
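The touchpad-style mapping can be illustrated with a short Python sketch. The sizes of the touch area and of VS2, and the control rectangle, are values assumed only for the example.

TOUCH_W, TOUCH_H = 200, 120      # assumed size of touch area D4 (camera units)
VS2_W, VS2_H = 1920, 1080        # assumed resolution of VS2

# Assumed control layout on VS2: name -> (x, y, width, height).
VS2_CONTROLS = {"play_button": (800, 400, 320, 200)}

def map_to_vs2(tx, ty):
    # Scale the D4 coordinate into VS2 coordinates using the preset
    # mapping relation between the two areas.
    return tx * VS2_W / TOUCH_W, ty * VS2_H / TOUCH_H

def hit_control(x, y):
    # Determine which VS2 control, if any, should execute the operation.
    for name, (cx, cy, cw, ch) in VS2_CONTROLS.items():
        if cx <= x <= cx + cw and cy <= y <= cy + ch:
            return name
    return None

x, y = map_to_vs2(100, 45)        # touch near the middle of D4
print((x, y), hit_control(x, y))  # (960.0, 405.0) 'play_button'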

The exit control D5 is used to collapse the display control interface D0. For example, the user may collapse the display control interface D0 by clicking the exit control D5; the icon of the two-screen control D is then displayed on the first virtual display screen VS1.

In one embodiment, a plurality of smart desk lamps are connected through a network to form a communication system. The users corresponding to the desk lamps can exchange information through the communication system, and these users may have different identity types. For example, the users may include a first number of users with a first identity, a second number of users with a second identity, and so on.

In some embodiments, the communication system may be an online teaching system, and the users corresponding to the smart desk lamps may specifically be one or more teachers and a plurality of students.

It can be understood that when the multi-user communication function of the communication system is used, the content displayed on the first virtual display screen VS1 and the second virtual display screen VS2 of a user's smart desk lamp can differ according to the user's identity. In the present application, when a teacher and a student use the online teaching system, the projected display content of the teacher's desk lamp and the student's desk lamp may differ.

In one embodiment, a scenario in which a user uses the entertainment interface is explained; the user may specifically be a teacher, a student, a parent, or another person.

In some embodiments, taking a chess game as an example, the user clicks the chess game module of the entertainment interface to enter the chess game mode. In some embodiments, the chess game may be a gobang game; it is understood that other board games, such as chess or military chess, may also be used.

In some embodiments, a plurality of display screen identifiers may be stored in the controller of the desk lamp to indicate that the desk lamp can project a plurality of virtual display screens. The display screen identifiers may be represented by a screen function, for example screen = {VS1, VS2}, which indicates that the desk lamp can project two virtual display screens, VS1 and VS2, each of which is a display screen identifier. The screen function may be stored in an underlying program of the controller; an application program may call the screen function and obtain from it the number and names of the desk lamp's virtual display screens.

In some embodiments, a multi-screen function parameter may be stored in the controller of the desk lamp to indicate that the desk lamp can project a plurality of virtual display screens. The parameter may be stored in an underlying program of the controller, and an application program may call the parameter and thereby learn that the desk lamp has multiple virtual display screens. Of course, the multi-screen function parameter may also include the display screen identifiers and the number of all virtual display screens.
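How an application might call into the controller's underlying program for the screen function and the multi-screen function parameter can be sketched as follows in Python; the storage layout and function names are assumptions made for illustration.

CONTROLLER_PROPERTIES = {
    # The screen function: the set of display screen identifiers the desk
    # lamp can project, e.g. screen = {VS1, VS2}.
    "screen": ["VS1", "VS2"],
    # The multi-screen function parameter indicating multi-screen support.
    "multi_screen": True,
}

def get_screen_function():
    return CONTROLLER_PROPERTIES.get("screen", [])

def supports_multi_screen():
    return bool(CONTROLLER_PROPERTIES.get("multi_screen", False))

screens = get_screen_function()
print(len(screens), screens)     # 2 ['VS1', 'VS2'] -> number and names of the screens
print(supports_multi_screen())   # True -> the application enters multi-screen mode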

In some embodiments, some or all of the applications in fig. 5A and/or fig. 5B are configured to recognize the multi-screen function parameter or the screen function. Such an application may include two sets of display interfaces: one set for the single-screen display mode, to adapt to single-screen devices such as smart phones and smart televisions, and the other set for the multi-screen display mode, to adapt to multi-screen devices such as the smart desk lamp of the embodiments of the present application. In the single-screen display mode, the application is configured to generate response data without a display screen identifier according to the operation instruction input by the user; in the multi-screen display mode, the application is configured to generate one group of response data carrying one display screen identifier, or multiple groups of response data, in which case each group may or may not carry a display screen identifier.

Of course, an application capable of recognizing the display screen identifiers or the multi-screen function parameter may also include only one set of display interfaces, namely the interfaces used in the multi-screen display mode. When the smart desk lamp projects the interfaces of such an application supporting multi-screen display, different interfaces of the application can be projected on different virtual display screens.
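Under these conventions, the application-side generation of response data might look like the following minimal Python sketch, where the interface names and the shape of the response groups are illustrative assumptions.

def build_response(display_mode, action):
    if display_mode == "single":
        # Single-screen mode: one group of interface data, no identifier.
        return [{"interface": f"{action}_page"}]
    # Multi-screen mode: one or more groups, each of which may carry a
    # display screen identifier.
    return [
        {"interface": f"{action}_main_page",  "screen": "VS1"},
        {"interface": f"{action}_guide_page", "screen": "VS2"},
    ]

print(build_response("single", "start"))  # one untagged group
print(build_response("multi", "start"))   # two groups tagged VS1 / VS2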

In the following, the multi-screen display method is described in detail by taking as an example a gobang application that supports the multi-screen display mode and displays at most two interfaces simultaneously.

Consider a user, for example user one, who plays a chess game through the projection function of the desk lamp. Fig. 9 is a schematic timing diagram of a user using the desk lamp according to some embodiments. In fig. 9, user one interacts with the desk lamp, and the desk lamp interacts with a server of the application program, which may be, for example, a gobang application. The desk lamp includes an infrared camera, a desk lamp body, a desktop projection structure, and a wall projection structure. The infrared camera may be the camera 300 of the above embodiments; the desk lamp body may include a base, a support, and a lighting bulb, with the controller 200 of the above embodiments arranged in the base; the desktop projection structure may be the first projection structure 110, and the wall projection structure may be the second projection structure 120. The server of the application program may include a game server and a video server, where the game server implements the basic functions of the game, such as man-machine battle and friend battle, and the video server implements the video chat function of the game.

It should be noted that, for a game application, the video chat function may be implemented by the game developer itself or in cooperation with a third party. If it is implemented by the game developer itself, the game server and the video server may be one hardware device or different hardware devices; if it is implemented in cooperation with a third party, the game server and the video server may be different hardware devices. In addition to the video chat function, a game application may have other expansion functions, and more servers may be needed to implement them. As shown in fig. 9, once the user turns on the desk lamp by operating the power key on the desk lamp body, the infrared camera continuously monitors user operations to obtain real-time operation feedback information. In some embodiments, after the desk lamp is turned on, the desktop projection structure may project the educational interface shown in fig. 5A on the desktop where the desk lamp is located, and the wall projection structure may project the wall Launcher interface shown in fig. 5C on the wall.

The user can enter the entertainment interface shown in fig. 5B through a sliding operation.

In some embodiments, the user can start a game by clicking a game icon on the entertainment interface shown in fig. 5B. If the game icon clicked by the user is the icon of game one, the infrared camera detects the user operation to obtain operation feedback information and sends it to the controller in the desk lamp body. Illustratively, game one is a gobang application.

In some embodiments, if the control instruction corresponding to the operation feedback information is a start instruction for the gobang application, the controller transmits the start instruction to the gobang application, so that the gobang application generates its start interface according to the start instruction.

In some embodiments, after receiving the start instruction sent by the controller, the gobang application may actively detect whether the current device has the multi-screen function parameter, entering the multi-screen display mode if the parameter is detected and the single-screen display mode otherwise.

In some embodiments, the controller may instead actively send the multi-screen function parameter to the gobang application along with the start instruction; the gobang application enters the multi-screen display mode if it can recognize the parameter and the single-screen display mode if it cannot.

Taking the case where the gobang application enters the multi-screen display mode according to the start instruction, the gobang application can generate two groups of interface data: interface data for the game main interface and interface data for the game guide interface. The interface data of the game main interface may carry a main screen parameter, and the interface data of the game guide interface may carry an auxiliary screen parameter; the main screen parameter indicates that the corresponding interface is to be displayed on the main display screen, and the auxiliary screen parameter indicates that the corresponding interface is to be displayed on the auxiliary display screen. For the smart desk lamp, the main display screen may be the first virtual display screen, that is, the desktop display screen, and the auxiliary display screen may be the second virtual display screen, that is, the wall display screen. Thus, the main screen parameter may include the display screen identifier of the first virtual display screen, and the auxiliary screen parameter may include the display screen identifier of the second virtual display screen.

In some embodiments, the controller may transmit the interface data of the game main interface to the desktop projection structure according to the main screen parameter in that data, so that the desktop projection structure projects the game main interface shown in fig. 10A on the desktop; likewise, the controller may transmit the interface data of the game guide interface to the wall projection structure according to the auxiliary screen parameter in that data, so that the wall projection structure projects the game guide interface shown in fig. 10B on the wall.
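The controller-side routing of each group of interface data to a projection structure by its screen parameter can be sketched as follows in Python; the projector table and function names are assumptions made for the example.

PROJECTORS = {
    "VS1": "desktop projection structure",   # main display screen
    "VS2": "wall projection structure",      # auxiliary display screen
}

def route(interface_groups, default_screen="VS1"):
    for group in interface_groups:
        # Use the display screen identifier carried by the group, or fall
        # back to the default virtual display screen when none is present.
        screen = group.get("screen", default_screen)
        print(f"{PROJECTORS[screen]} projects {group['interface']} on {screen}")

route([
    {"interface": "game main interface",  "screen": "VS1"},
    {"interface": "game guide interface", "screen": "VS2"},
])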

Fig. 10A is a schematic diagram of the display page of the first virtual display screen VS1 of user one's smart desk lamp in the chess game mode. As shown in fig. 10A, the main interface of the gobang application includes a plurality of battle entrances, specifically a friend battle entrance M1, a man-machine battle entrance M2, a high-score battle entrance M3, and a chess power evaluation entrance M4. The friend battle entrance M1 is used to create a friend battle room, the man-machine battle entrance M2 is used to create a man-machine battle room, the high-score battle entrance M3 is used to create a high-score battle room, and the chess power evaluation entrance M4 is used to create a chess power evaluation room.

Referring to fig. 10A, the game main interface further includes other function controls M5, which specifically include a friend control, a leaderboard control, a mail control, an activity control, a make-up control, a setting control, and the like.

Fig. 10B is a schematic diagram of the second virtual display screen VS2 of the user's smart desk lamp in the chess game mode. As shown in fig. 10B, after gobang is selected and before the game starts, the second virtual display screen VS2 displays a prompt message indicating that it is waiting for an opponent to connect.

In some embodiments, the friend battle entrance M1 in fig. 10A is a friend battle control. Once the user clicks this control in fig. 10A, the infrared camera detects the user operation, obtains operation feedback information, and sends it to the controller in the desk lamp body.

In some embodiments, if the control instruction corresponding to the operation feedback information is a trigger instruction for the friend battle control, the controller transmits the trigger instruction to the gobang application, so that the gobang application generates response data according to the trigger instruction.

In some embodiments, after receiving the trigger instruction of the friend battle control sent by the controller, the gobang application may set the current display mode to the multi-screen display mode and generate two groups of interface data according to the trigger instruction: interface data for the create room interface and interface data for the game introduction interface. The interface data of the create room interface may carry the main screen parameter, and the interface data of the game introduction interface may carry the auxiliary screen parameter. The controller transmits the interface data of the create room interface to the desktop projection structure according to the main screen parameter, so that the desktop projection structure projects the create room interface shown in fig. 11A on the desktop; the controller transmits the interface data of the game introduction interface to the wall projection structure according to the auxiliary screen parameter, so that the wall projection structure projects the game introduction interface shown in fig. 11B on the wall.

Referring to fig. 11A, after the user clicks the friend battle entrance M1, the first virtual display screen VS1 enters the create room interface. The create room interface displays game-related information of the current user, such as games won, level-clearing results, and the total number of games played. In addition, the create room interface includes an invite friend control, and the user can invite friends to play by clicking it.

Referring to fig. 11B, after the user clicks the friend battle entrance M1, the second virtual display screen VS2 enters the game introduction interface, which may display information about the current game, such as the game introduction, game rules, and game functions.

In some embodiments, the create room interface further includes a video call control and a start game control. The video call control is used to send a video call request to the friend after the friend accepts the game invitation and enters the room. The start game control is used to start the game battle after the friend accepts the game invitation and enters the room.

In some embodiments, the create room interface may omit the start game control, in which case the game battle starts automatically after the friend accepts the game invitation and enters the room.

In some embodiments, after the user clicks the invite friend control in fig. 11A, the infrared camera detects the user operation, obtains operation feedback information, and sends it to the controller in the desk lamp body.

In some embodiments, if the control instruction corresponding to the operation feedback information is a trigger instruction for the invite friend control, the controller transmits the trigger instruction to the gobang application, so that the gobang application generates response data according to the trigger instruction.

In some embodiments, after receiving the trigger instruction of the invite friend control sent by the controller, the gobang application may set the current display mode to the multi-screen display mode and generate one group of interface data according to the trigger instruction, namely interface data for the buddy list interface, which may carry the main screen parameter. The controller transmits the interface data of the buddy list interface to the desktop projection structure according to the main screen parameter, so that the desktop projection structure projects the buddy list interface shown in fig. 12 on the desktop. In some embodiments, if the gobang application generates only one group of interface data and the data needs to be projected by the desktop projection structure, the interface data may omit the display screen identifier; the controller of the smart desk lamp then displays the interface corresponding to this identifier-free interface data on the default display screen, that is, the first virtual display screen on the desktop.
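The display rule applied here, matching the behavior recited in the claims, can be condensed into a small dispatch sketch in Python; the data shapes are illustrative assumptions.

DEFAULT_SCREEN = "VS1"   # default: the first virtual display screen on the desktop

def dispatch(response_data):
    # Map each group of interface data to a virtual display screen: use the
    # group's display screen identifier if present, else the default screen.
    # One tagged group, several groups, and one identifier-free group are
    # all covered by the same rule.
    return {g.get("screen", DEFAULT_SCREEN): g["interface"] for g in response_data}

# The buddy list interface carries no identifier, so it lands on the
# default screen, i.e. the desktop.
print(dispatch([{"interface": "buddy list interface"}]))
# -> {'VS1': 'buddy list interface'}

# Two tagged groups are displayed on their respective screens.
print(dispatch([{"interface": "create room interface", "screen": "VS1"},
                {"interface": "game introduction interface", "screen": "VS2"}]))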

Fig. 12 is a schematic diagram of the buddy list interface, which may display the current state of each of user one's buddies. User one may click the invitation control corresponding to a buddy to invite that buddy to play. The invitation control of a buddy who is currently online is clickable, while the invitation control of a buddy who is currently offline is not.

In some embodiments, after user one clicks the invitation control corresponding to user two in fig. 12, the infrared camera detects the user operation, obtains operation feedback information, and sends it to the controller in the desk lamp body.

In some embodiments, if the control instruction corresponding to the operation feedback information is a trigger instruction for the invitation control corresponding to user two, the controller transmits the trigger instruction to the gobang application, and the gobang application sends a game joining request to the game server according to the trigger instruction, so that the server sends game invitation prompt information to user two's gobang application.

If user two accepts the game invitation, the server sends a message that user two has accepted to user one's gobang application, which can generate interface data for a game preparation interface according to the message. This interface data may carry the main screen parameter, and the controller transmits it to the desktop projection structure accordingly, so that the desktop projection structure projects the game preparation interface shown in fig. 13 on the desktop.

If user two declines the game invitation, the server sends a message that user two has declined to user one's gobang application, which can generate interface data for a prompt interface according to the message. This interface data may carry the auxiliary screen parameter, and the controller transmits it to the wall projection structure accordingly, so that the wall projection structure projects the prompt interface on the wall and user one learns that user two has declined to join the game.

Referring to fig. 13, user two accepts the game invitation of the current user (user one) and enters the room. The first virtual display screen VS1 of user one's smart desk lamp then displays game-related information of user two together with a plurality of controls, such as a video call control, a start game control, a return control, a homepage control, a progress control, and a minimization control.

Fig. 14 is a schematic diagram of the display page of the first virtual display screen VS1 of user two's (the invited user's) smart desk lamp after accepting the invitation. As shown in fig. 14, after user two enters the room, the first virtual display screen VS1 displays the game-related information of user one and user two at the same time; the difference from fig. 13 is that the display positions of user one and user two are reversed. In some embodiments, the page displayed on user two's first virtual display screen VS1 contains a video call control for initiating a video call request to user one. In some embodiments, it may also contain a start game control for initiating a game request to user one.

In some embodiments, the user may first click the start game control in fig. 13 and start the video call after the game begins, or first click the video call control in fig. 13 and start the game after the video call begins.

If the user clicks the video call control in fig. 13, the infrared camera detects the user operation, obtains operation feedback information, and sends it to the controller in the desk lamp body.

In some embodiments, if the control instruction corresponding to the operation feedback information is a trigger instruction for the video call control, the controller transmits the trigger instruction to the gobang application, so that the gobang application generates response data according to the trigger instruction.

In some embodiments, after receiving the trigger instruction of the video call control sent by the controller, the gobang application may set the current display mode to the multi-screen display mode, generate one group of interface data according to the trigger instruction, generate a video call request, and send the video call request to the server. The group of interface data may include interface data for a video call invitation interface, which may carry the auxiliary screen parameter. The controller transmits this interface data to the wall projection structure according to the auxiliary screen parameter, so that the wall projection structure projects the video call invitation interface shown in fig. 15 on the wall.

In some embodiments, referring to fig. 16, the video call invitation interface may further be provided with a cancel invitation control; if the user clicks it, the controller controls the projection picture of the wall projection structure to return to the interface shown in fig. 11B.

In some embodiments, the video call invitation interface may include the user image, that is, the local picture, captured by the camera on user one's desk lamp.

In some embodiments, after receiving the video call request, the server forwards it to user two's gobang application. After receiving the request, user two's gobang application displays a popup prompt about the received video invitation on user two's first virtual display screen VS1. User two may click accept to establish a video call connection with user one, upon which the popup on the first virtual display screen VS1 disappears. Of course, user two may also click decline, refusing to establish the video call connection with user one, and the popup likewise disappears.

If user two accepts the video call invitation, the server can acquire the local picture of user two collected by user two's smart desk lamp and send it, together with a message that user two has accepted the video call invitation, to user one's gobang application, which can generate video chat interface data according to the message and user two's local picture. The video chat interface data may carry the auxiliary screen parameter, and the controller transmits it to the wall projection structure accordingly, so that the wall projection structure projects the video call interface shown in fig. 17A on the wall.

If user two declines the video call invitation, the server sends a message that user two has declined to user one's gobang application, which can generate interface data for a prompt interface according to the message. This interface data may carry the auxiliary screen parameter, and the controller transmits it to the wall projection structure accordingly, so that the wall projection structure projects the prompt interface on the wall and user one learns that user two has declined the video call.

After the video call connection is established, video pictures of user one and user two are displayed on the second virtual display screens VS2 of both users. Referring to fig. 17A, for user one, user one's video picture may be displayed above user two's; for user two, user two's video picture may be displayed above user one's.
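The handling of the invitation reply on user one's side can be sketched as follows in Python; the message fields and interface names are assumptions made for illustration, not the application's actual protocol.

def handle_invite_reply(accepted, peer_local_picture=None):
    """Build user one's interface data from user two's reply to the video invite."""
    if accepted:
        # Acceptance arrives together with user two's local picture; the
        # resulting video chat interface carries the auxiliary screen
        # parameter so it is projected on the wall (VS2).
        return {"interface": "video chat interface",
                "pictures": ["user one local picture", peer_local_picture],
                "screen": "VS2"}
    # A refusal yields a prompt interface, likewise routed to the wall.
    return {"interface": "video call declined prompt", "screen": "VS2"}

print(handle_invite_reply(True, "user two local picture"))
print(handle_invite_reply(False))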

In this way, after the video call connection is established, user one's smart desk lamp displays the game interface shown in fig. 13 on the first virtual display screen VS1 and the peer video interface of fig. 17A on the second virtual display screen VS2, giving the user an experience close to an actual face-to-face game. After user one clicks the start game control in fig. 13, user one and user two begin the game battle, and a gobang board and the current situation are displayed on the first virtual display screens VS1 of both players. The interface displayed on user one's first virtual display screen VS1 can be seen in fig. 17B.

Fig. 17B is a schematic diagram of the display page of the first virtual display screen VS1 of user one's smart desk lamp during play. As shown in fig. 17B, during play the display pages of the first virtual display screens VS1 of user one's and user two's smart desk lamps may be the same: both include a gobang board N1 and functional controls N2, and a user places a piece by clicking an intersection of the vertical and horizontal lines on the gobang board N1.

The functional controls N2 specifically include an exit game control, an undo control, a restart control, and an exit video control. The exit game control is used to exit the current game, the undo control is used to take back the previous move, the restart control is used to start a new game, and the exit video control is used to exit the video call.

In some embodiments, when the user clicks any one of the functional controls N2, a popup asking the user to confirm the operation is displayed, which prevents accidental clicks.

In some embodiments, if the user wants to stop playing, the user can click the homepage control on the interface shown in fig. 17B; the infrared camera detects the user operation, obtains operation feedback information, and sends it to the controller in the desk lamp body.

In some embodiments, if the control instruction corresponding to the operation feedback information is a trigger instruction for the homepage control, the controller generates an exit instruction for the gobang application according to the trigger instruction, so that the gobang application responds accordingly.

In some embodiments, the trigger instruction of the homepage control may serve as an exit instruction for the current application. After receiving it from the controller, the gobang application may exit its interface on each virtual display screen it occupies, for example exiting the interface shown in fig. 17B on the first virtual display screen and the interface shown in fig. 17A on the second virtual display screen, so that the first virtual display screen again displays the interface shown in fig. 5B and the second virtual display screen again displays the interface shown in fig. 5C.

According to the above embodiments, display screen identifiers are stored in the intelligent projection device, so that when an application needs to display multiple interfaces, it can generate multiple groups of interface data according to the device's multiple display screen identifiers. The intelligent projection device then displays the interfaces corresponding to the multiple groups of interface data on separate virtual display screens, so the interfaces do not block one another and the display effect is improved.

Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.
