Screen control method, device and equipment based on artificial intelligence and storage medium

Document No.: 1952059    Publication date: 2021-12-10

Reading note: This technology, "Screen control method, device and equipment based on artificial intelligence and storage medium" (基于人工智能的屏幕控制方法、装置、设备及存储介质), was designed and created by 赵国胜 on 2021-09-16. Its main content is as follows: the application discloses a screen control method, device, equipment and storage medium based on artificial intelligence, including: acquiring rendering data to be displayed in a preset target video memory; performing imaging processing on the rendering data to generate a target frame image, and performing binarization processing on the target frame image to generate a binary frame image; inputting the binary frame image into a preset screen control model, wherein the screen control model is a neural network model trained to a convergence state in advance and used for classifying, according to the binary frame image, the number of screens on which the rendering data is displayed; reading the number of screens output by the screen control model, and constructing a display cluster according to that number, wherein the display cluster includes one main display and at least one auxiliary display; and distributing the rendering data to the main display and the at least one auxiliary display for split-screen display. The method avoids frequent manual switching of the split-screen display strategy, and improves the applicability and utilization rate of split-screen display.

1. A screen control method based on artificial intelligence is characterized by comprising the following steps:

acquiring rendering data to be displayed in a preset target video memory;

performing imaging processing on the rendering data to generate a target frame image, and performing binarization processing on the target frame image to generate a binary frame image;

inputting the binary frame map into a preset screen control model, wherein the screen control model is a neural network model which is trained to a convergence state in advance and used for classifying the number of screens displaying the rendering data according to the binary frame map;

reading the number of screens output by the screen control model, and constructing a display cluster according to the number of the screens, wherein the display cluster comprises a main display and at least one auxiliary display;

and distributing the rendering data to the main display and at least one auxiliary display for split screen display.

2. The screen control method based on artificial intelligence, according to claim 1, wherein the reading the number of screens output by the screen control model and constructing a display cluster according to the number of screens comprises:

calling the online state information of a plurality of display screens according to a preset calling interface;

screening at least two target display screens from the plurality of display screens based on the online status information, wherein the target display screens are display screens whose corresponding online status information indicates that they are online;

and constructing the display cluster in the at least two target display screens according to the screen number.

3. The artificial intelligence based screen control method of claim 2, wherein the constructing the display cluster in the at least two target display screens according to the screen number comprises:

lighting the at least two target display screens to enable the at least two target display screens to be in a light-emitting state;

collecting a set image of the plurality of display screens when the at least two target display screens are in a light-emitting state;

extracting a display image formed by the at least two target display screens in the set image;

preprocessing the display image, and determining a central point of the preprocessed display image based on a preset central determination rule;

determining the main display based on the center point and selecting the at least one secondary display around the main display according to the main display and the number of screens.

4. The artificial intelligence based screen control method of claim 3, wherein the pre-processing the display image and determining the center point of the pre-processed display image based on a preset center determination rule comprises:

acquiring relative position information between a target user and the plurality of display screens;

performing dot processing on the target display screens in the display image, taking each display screen as a unit, to generate a dot diagram of the display image;

marking the position mark of the target user in the dot diagram according to the relative position information;

inputting the dot diagram carrying the position mark into a preset central point recognition model, wherein the central point recognition model is a neural network model which is trained to a convergence state in advance and used for recognizing the central point of the image;

and reading the central point of the display image output by the central point identification model.

5. The artificial intelligence based screen control method of claim 1, wherein the distributing the rendering data to the main display and at least one sub-display for split screen display comprises:

acquiring parameter information of the at least one auxiliary display, wherein the parameter information comprises position coordinates of the auxiliary display and a first screen size;

constructing at least one application window according to the first screen size, wherein the size of the application window is the same as the first screen size;

and correspondingly delivering the at least one application window into the at least one secondary display based on the position coordinates.

6. The artificial intelligence based screen control method of claim 5, wherein the obtaining parameter information of the at least one sub-display comprises:

acquiring resolution information of the main display and direction information between the main display and the at least one auxiliary display;

determining position coordinates of the at least one secondary display according to the resolution information and the direction information;

generating parameter information of the at least one sub-display according to the position coordinates and the first screen size.

7. The artificial intelligence based screen control method of claim 5, wherein the distributing the rendering data into the main display and at least one sub-display for split-screen display comprises:

acquiring a second screen size of the main display;

segmenting the rendering data according to the first screen size and the second screen size to generate first display data and at least one second display data;

and sending the first display data to the main display, and distributing the at least one second display data to the at least one application window for display.

8. A screen control device based on artificial intelligence, comprising:

the acquisition module is used for acquiring rendering data to be displayed in a preset target video memory;

the processing module is used for carrying out imaging processing on the rendering data to generate a target frame image and carrying out binarization processing on the target frame image to generate a binary frame image;

the classification module is used for inputting the binary frame map into a preset screen control model, wherein the screen control model is a neural network model which is trained to be in a convergence state in advance and is used for classifying the number of screens displaying the rendering data according to the binary frame map;

the reading module is used for reading the number of screens output by the screen control model and constructing a display cluster according to the number of the screens, wherein the display cluster comprises a main display and at least one auxiliary display;

and the execution module is used for distributing the rendering data to the main display and at least one auxiliary display for split screen display.

9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to carry out the steps of the artificial intelligence based screen control method according to any one of claims 1 to 7.

10. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the artificial intelligence based screen control method according to any one of claims 1 to 7.

Technical Field

The embodiment of the invention relates to the field of artificial intelligence, in particular to a screen control method, a screen control device, screen control equipment and a storage medium based on artificial intelligence.

Background

Today the internet is developing at high speed, and "mass information" has become a defining keyword of the current network. For users, a crucial problem to be solved is how to face this explosively growing mass of information so that the information serves people, rather than submerging and controlling them.

The use of dual-screen and multi-screen displays frees the user from the viewing limitations of a single-screen computer. Multi-screen display can greatly improve users' working efficiency and increase comfort: the number of times a user switches back and forth between different documents and applications is reduced or even eliminated entirely, thereby reducing the time required to complete each task.

The inventor of the invention found in research that, in the prior art, after dual-screen or multi-screen display is manually set up by an operator, the host merely executes the user's split-screen display strategy mechanically; each time the split-screen display strategy needs to be adjusted, the operator must manually set a new split-screen strategy, which reduces the adaptability and use efficiency of split-screen display.

Disclosure of Invention

The embodiment of the invention provides a screen control method, a device, equipment and a storage medium based on artificial intelligence, which can adjust a multi-screen display mode according to display contents.

In order to solve the above technical problem, the embodiment of the present invention adopts a technical solution that: provided is a screen control method based on artificial intelligence, including:

acquiring rendering data to be displayed in a preset target video memory;

performing imaging processing on the rendering data to generate a target frame image, and performing binarization processing on the target frame image to generate a binary frame image;

inputting the binary frame map into a preset screen control model, wherein the screen control model is a neural network model which is trained to a convergence state in advance and used for classifying the number of screens displaying the rendering data according to the binary frame map;

reading the number of screens output by the screen control model, and constructing a display cluster according to the number of the screens, wherein the display cluster comprises a main display and at least one auxiliary display;

and distributing the rendering data to the main display and at least one auxiliary display for split screen display.

Optionally, the reading the number of screens output by the screen control model, and constructing a display cluster according to the number of screens includes:

calling the online state information of a plurality of display screens according to a preset calling interface;

screening at least two target display screens from the plurality of display screens based on the online status information, wherein the target display screens are display screens whose corresponding online status information indicates that they are online;

and constructing the display cluster in the at least two target display screens according to the screen number.

Optionally, the constructing the display cluster in the at least two target display screens according to the screen number includes:

lighting the at least two target display screens to enable the at least two target display screens to be in a light-emitting state;

collecting a set image of the plurality of display screens when the at least two target display screens are in a light-emitting state;

extracting a display image formed by the at least two target display screens in the set image;

preprocessing the display image, and determining a central point of the preprocessed display image based on a preset central determination rule;

determining the main display based on the center point and selecting the at least one secondary display around the main display according to the main display and the number of screens.

Optionally, the preprocessing the display image and determining the center point of the preprocessed display image based on a preset center determination rule include:

acquiring relative position information between a target user and the plurality of display screens;

performing dot processing on the target display screens in the display image, taking each display screen as a unit, to generate a dot diagram of the display image;

marking the position mark of the target user in the dot diagram according to the relative position information;

inputting the dot diagram carrying the position mark into a preset central point recognition model, wherein the central point recognition model is a neural network model which is trained to a convergence state in advance and used for recognizing the central point of the image;

and reading the central point of the display image output by the central point identification model.

Optionally, before distributing the rendering data to the main display and the at least one sub-display for split-screen display, the method includes:

acquiring parameter information of the at least one auxiliary display, wherein the parameter information comprises position coordinates of the auxiliary display and a first screen size;

constructing at least one application window according to the first screen size, wherein the size of the application window is the same as the first screen size;

and correspondingly delivering the at least one application window into the at least one secondary display based on the position coordinates.

Optionally, the obtaining of the parameter information of the at least one sub-display includes:

acquiring resolution information of the main display and direction information between the main display and the at least one auxiliary display;

determining position coordinates of the at least one secondary display according to the resolution information and the direction information;

generating parameter information of the at least one sub-display according to the position coordinates and the first screen size.

Optionally, the distributing the rendering data to the main display and at least one sub-display for split display includes:

acquiring a second screen size of the main display;

segmenting the rendering data according to the first screen size and the second screen size to generate first display data and at least one second display data;

and sending the first display data to the main display, and distributing the at least one second display data to the at least one application window for display.

In order to solve the above technical problem, an embodiment of the present invention further provides a screen control device based on artificial intelligence, including:

the acquisition module is used for acquiring rendering data to be displayed in a preset target video memory;

the processing module is used for carrying out imaging processing on the rendering data to generate a target frame image and carrying out binarization processing on the target frame image to generate a binary frame image;

the classification module is used for inputting the binary frame map into a preset screen control model, wherein the screen control model is a neural network model which is trained to be in a convergence state in advance and is used for classifying the number of screens displaying the rendering data according to the binary frame map;

the reading module is used for reading the number of screens output by the screen control model and constructing a display cluster according to the number of the screens, wherein the display cluster comprises a main display and at least one auxiliary display;

and the execution module is used for distributing the rendering data to the main display and at least one auxiliary display for split screen display.

Optionally, the screen control device based on artificial intelligence further includes:

the first calling submodule is used for calling the online state information of the plurality of display screens according to a preset calling interface;

the first screening submodule is used for screening at least two target display screens from the plurality of display screens based on the online state information, wherein the target display screens are display screens whose corresponding online state information indicates that they are online;

a first constructing submodule, configured to construct the display cluster in the at least two target display screens according to the screen number.

Optionally, the screen control device based on artificial intelligence further includes:

the first control submodule is used for lightening the at least two target display screens to enable the at least two target display screens to be in a light-emitting state;

the first acquisition sub-module is used for acquiring the collective images of the plurality of display screens when the at least two target display screens are in a luminous state;

the first extraction submodule is used for extracting a display image formed by the at least two target display screens in the set image;

the first processing submodule is used for preprocessing the display image and determining a central point of the preprocessed display image based on a preset central determination rule;

a first execution sub-module for determining the main display based on the center point and selecting the at least one sub-display around the main display according to the main display and the number of screens.

Optionally, the screen control device based on artificial intelligence further includes:

the first obtaining sub-module is used for obtaining relative position information between a target user and the plurality of display screens;

the second processing submodule is used for performing dot processing on the target display screens in the display image, taking each display screen as a unit, so as to generate a dot diagram of the display image;

the first labeling submodule is used for labeling the position mark of the target user in the dot diagram according to the relative position information;

the first classification submodule is used for inputting the dot diagram carrying the position mark into a preset central point recognition model, wherein the central point recognition model is a neural network model which is trained to be in a convergence state in advance and is used for recognizing the central point of the image;

and the first reading submodule is used for reading the central point of the display image output by the central point identification model.

Optionally, the screen control device based on artificial intelligence further includes:

the second obtaining sub-module is used for obtaining parameter information of the at least one auxiliary display, wherein the parameter information comprises position coordinates of the auxiliary display and a first screen size;

a third processing submodule, configured to construct at least one application window according to the first screen size, where the size of the application window is the same as the first screen size;

and the first delivery sub-module is used for correspondingly delivering the at least one application window to the at least one auxiliary display based on the position coordinates.

Optionally, the screen control device based on artificial intelligence further includes:

a third obtaining sub-module, configured to obtain resolution information of the main display and direction information between the main display and the at least one sub-display;

a fourth processing submodule, configured to determine the position coordinate of the at least one sub-display according to the resolution information and the direction information;

and the second execution sub-module is used for generating parameter information of the at least one auxiliary display according to the position coordinates and the first screen size.

Optionally, the screen control device based on artificial intelligence further includes:

a fourth obtaining sub-module, configured to obtain a second screen size of the main display;

the fifth processing submodule is used for segmenting the rendering data according to the first screen size and the second screen size to generate first display data and at least one piece of second display data;

and the third execution sub-module is used for sending the first display data to the main display and distributing the at least one second display data to the at least one application window for display.

In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, cause the processor to execute the steps of the artificial intelligence based screen control method.

To solve the above technical problem, an embodiment of the present invention further provides a storage medium storing computer-readable instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the artificial intelligence based screen control method.

The embodiment of the invention has the following beneficial effects: binarization of the rendering data to be displayed reduces the amount of information the rendering data contains and hence the data volume of subsequent processing, and the binary frame image is then classified by the model. The split-screen display strategy can thus be switched rapidly according to real-time display requirements, frequent manual switching of the split-screen display strategy is avoided, and the applicability and utilization rate of split-screen display are improved.

Drawings

The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

FIG. 1 is a schematic diagram of a basic flow chart of an artificial intelligence based screen control method according to an embodiment of the present application;

FIG. 2 is a schematic flow chart illustrating a process of screening online status display screens to form a display cluster according to an embodiment of the present application;

FIG. 3 is a flow chart illustrating the screening of main displays and sub-displays according to an embodiment of the present application;

FIG. 4 is a schematic flow chart of determining the center point of the display image according to an embodiment of the present application;

FIG. 5 is a flowchart illustrating the process of creating a sub-display application window according to an embodiment of the present application;

FIG. 6 is a schematic view of a process for collecting sub-display parameter information according to an embodiment of the present application;

FIG. 7 is a schematic flowchart of performing a split-screen display on rendering data according to an embodiment of the present application;

FIG. 8 is a schematic diagram of a basic structure of an artificial intelligence-based screen control device according to an embodiment of the present application;

FIG. 9 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.

Detailed Description

Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

As used herein, a "terminal" includes both devices that are wireless signal receivers, devices that have only wireless signal receivers without transmit capability, and devices that have receive and transmit hardware, devices that have receive and transmit hardware capable of performing two-way communication over a two-way communication link, as will be understood by those skilled in the art. Such a device may include: a cellular or other communication device having a single line display or a multi-line display or a cellular or other communication device without a multi-line display; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "terminal" used herein may also be a communication terminal, a web-enabled terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, etc.

Referring to fig. 1, fig. 1 is a schematic view illustrating a basic flow of the screen control method based on artificial intelligence according to the present embodiment. As shown in fig. 1, a screen control method based on artificial intelligence includes:

s1100, collecting rendering data to be displayed in a preset target video memory;

When data is displayed, the data to be displayed is rendered into the target video memory and stored there in the form of frame images, waiting to be sent to the display; at this point, the frame images in the target video memory are read as the rendering data to be displayed.

The rendering data can be a statically displayed image or the frame images of a continuously playing video.

S1200, performing imaging processing on the rendering data to generate a target frame image, and performing binarization processing on the target frame image to generate a binary frame image;

the read rendering data is an array matrix composed of pixel values, and the pixel values are converted into pixel points with corresponding colors according to the numerical information of each pixel value in the data matrix, so that the imaging processing of the rendering data is completed, and a target frame image of the rendering data is generated.

The target frame image is then binarized: a binarization threshold is set, the pixel values of the pixel points in the target frame image are compared with the threshold, pixel points whose value is greater than or equal to the threshold are set to 255, and pixel points whose value is below the threshold are set to 0. Once the modification is complete, the binarization of the target frame image is finished and the binary frame image of the rendering data is generated.
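
As an illustration, a minimal sketch of the imaging and binarization steps, assuming the rendering data has already been read from the target video memory as a NumPy pixel matrix (the function name and default threshold are placeholders, not the patent's specification):

```python
import numpy as np

def binarize_frame(render_data: np.ndarray, threshold: int = 128) -> np.ndarray:
    # Imaging step: treat the pixel-value matrix as an image; average the
    # channels to grayscale if the data has a colour dimension.
    gray = render_data.mean(axis=-1) if render_data.ndim == 3 else render_data
    # Binarization step: pixels at or above the threshold become 255, the rest 0.
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)
```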

S1300, inputting the binary frame map into a preset screen control model, wherein the screen control model is a neural network model which is trained to a convergence state in advance and used for classifying the number of screens displaying the rendering data according to the binary frame map;

and inputting the binary frame image into a preset screen control model, wherein the screen control model is a neural network model which is trained to a convergence state in advance and used for classifying the number of screens displaying the rendering data according to the binary frame image.

In this embodiment, the screen control model is obtained by training a convolutional neural network model, a deep neural network model, a recurrent neural network model, or any variant of the above models.

The screen control model is trained in a supervised manner: a large amount of sample data is collected and labeled, with each sample labeled with the number of screens on which it should be displayed according to its display requirement. The initial model is then trained on the labeled samples under supervision, and when the number of training iterations reaches a preset count or the classification accuracy reaches a set threshold, the initial model is trained to convergence and becomes the screen control model.
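
A minimal supervised-training sketch of such a screen control model (PyTorch, the tiny network below, and the three screen-count classes are assumptions chosen purely for illustration, not the patent's specification):

```python
import torch
import torch.nn as nn

class ScreenControlNet(nn.Module):
    """Placeholder classifier: binary frame image in, screen-count class out."""
    def __init__(self, num_classes: int = 3):  # e.g. 1, 2 or 3 screens
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_to_convergence(model, loader, max_epochs=50, acc_threshold=0.95):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _epoch in range(max_epochs):          # stop after a preset number of epochs...
        correct, total = 0, 0
        for binary_frames, screen_labels in loader:
            opt.zero_grad()
            logits = model(binary_frames)
            loss_fn(logits, screen_labels).backward()
            opt.step()
            correct += (logits.argmax(1) == screen_labels).sum().item()
            total += screen_labels.numel()
        if correct / total >= acc_threshold:  # ...or once accuracy reaches the set threshold
            break
    return model
```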

In some embodiments, the screen control model can classify the display direction of the rendering data in addition to the number of screens on which the rendering data is displayed; for example, the classification result can be horizontal single screen, horizontal double screen, horizontal triple screen, vertical single screen, vertical double screen, vertical triple screen, or more screens arranged horizontally or vertically. The screen control model in this embodiment is likewise trained in a supervised manner, and the labels of the sample data include both the display direction and the number of screens, so that the trained screen control model is able to classify the display direction as well as the number of screens.

Because the main factors determining the number of screens or the display direction are the size of the image formed by the rendering data and its aspect ratio, and the image size has little correlation with the image content, converting the target frame image into a binary frame image before inputting it into the screen control model for classification greatly reduces the amount of information in the target frame image and improves the operational efficiency of the model. The binary frame image also de-emphasizes the display content while preserving the information of the decisive factors such as image size and aspect ratio, which helps improve the training speed and classification accuracy of the screen control model.

S1400, reading the number of screens output by the screen control model, and constructing a display cluster according to the number of the screens, wherein the display cluster comprises a main display and at least one auxiliary display;

When the screen control model classifies the input binary frame image, it outputs a corresponding classification result, which represents the number of screens required to display the rendering data; a display cluster for the display is then constructed according to that number of screens.

The display cluster includes a main display and a sub-display. However, the number of sub-displays is not limited thereto, and the number of sub-displays can be 2, 3, 4 or more in some embodiments, depending on the specific application scenario.

The primary display is typically selected as the first display in the display direction, e.g., the first display from left to right or the first display from top to bottom. However, the selection of the main display is not limited to this; in some embodiments, the display screen directly facing the user's viewing angle is selected as the main display, and the auxiliary displays surround the main display.

S1500, distributing the rendering data to the main display and at least one auxiliary display for split screen display.

After the display cluster is determined, the rendering data is sent to the display cluster, and the main display and the auxiliary displays display it synchronously.

When split-screen display is carried out, the main program needs to construct an application window at the position of each auxiliary display. The construction takes the upper left corner of the main display as the zero point of a coordinate system, with the resolution of the main display as the boundary. For example, when the resolution of the main display is 640 × 480, the starting point of the application window of the first auxiliary display located to the right of the main display is (640, 0), and an application window of the same size as that auxiliary display's resolution is constructed from this starting point; if there is another auxiliary display to the right of the first one, the starting point of its window is (the abscissa end point of the first auxiliary display's window, 0), and so on, so that the application windows of a plurality of auxiliary displays are constructed.

When an auxiliary display is located at some other position relative to the main display, the coordinate system is likewise constructed with the first pixel point at the upper-left vertex of the main display as the starting point, the application window is constructed from the relative positions of the auxiliary display and the main display, and the starting point of the application window is the upper-left corner of that auxiliary display.

After the main program has constructed the application windows of the auxiliary displays, the display data is divided according to the screen sizes of the main display and the auxiliary displays, in proportion to their extents along the display direction. For example, when the display cluster displays from left to right, the division ratio is the ratio of the horizontal widths of the main display and the auxiliary display: when that width ratio is 1:1, the rendering data is divided into two halves; when it is 4:6, the rendering data is divided in a 4:6 ratio. When the display cluster displays from top to bottom, the division ratio is the ratio of the vertical heights of the main display and the auxiliary display.
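
A minimal sketch of this proportional division along the display direction, assuming the rendering data is available as a NumPy frame image and the screen widths are known (the function name and inputs are illustrative):

```python
import numpy as np

def split_along_width(frame: np.ndarray, widths: list[int]) -> list[np.ndarray]:
    # Cut points proportional to the screen widths, e.g. widths=[4, 6] splits 4:6.
    total = sum(widths)
    bounds = [frame.shape[1] * sum(widths[:i + 1]) // total for i in range(len(widths))]
    pieces, start = [], 0
    for end in bounds:
        pieces.append(frame[:, start:end])
        start = end
    return pieces
```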

After the rendering data is distributed, the main display and the auxiliary displays receive their respective rendering data and display it, completing the split-screen display of the rendering data.

In the above embodiment, the rendering data to be displayed is binarized to reduce the amount of information it contains and hence the data volume of subsequent processing, and the binary frame image is then classified by the model. The split-screen display strategy can thus be switched rapidly according to real-time display requirements, frequent manual switching of the split-screen display strategy is avoided, and the applicability and utilization rate of split-screen display are improved.

In some embodiments, when constructing a display cluster, the display screens that are online may need to be screened out first. Referring to fig. 2, fig. 2 is a schematic flow chart illustrating the process of screening the online-status display screens to form the display cluster according to the present embodiment.

As shown in fig. 2, S1400 comprises:

S1411, calling the online state information of a plurality of display screens according to a preset calling interface;

When the host computer is connected to a plurality of display screens, some display screens may be in a powered-off, offline state and cannot be used as displays for the rendering data. Therefore, when the display cluster is constructed, the online status of each display screen needs to be detected.

The detection is performed as follows: the call information of the reserved port of each display screen is retrieved, and the port data of each display screen is called through that port call information; when the port data return value is 0, the display screen is not online, and when the port data return value is 1, the display screen is online.
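
A hedged sketch of this screening step; `query_port` below is a hypothetical callback standing in for the preset calling interface, not a real operating-system API:

```python
from typing import Callable

def screen_online_displays(display_ids: list[str],
                           query_port: Callable[[str], int]) -> list[str]:
    # query_port returns the reserved-port data value for a display screen:
    # 1 means the display screen is online, 0 means it is offline.
    online = [d for d in display_ids if query_port(d) == 1]
    if len(online) < 2:
        raise RuntimeError("at least two online target display screens are required")
    return online
```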

S1412, screening at least two target display screens from the plurality of display screens based on the online state information, wherein the target display screens are display screens whose corresponding online state information indicates that they are online;

and screening a preset number of target display screens from a plurality of display screens which are characterized to be online according to the online state information. The target display screen refers to a display screen which is characterized by the fact that the corresponding online state information is yes, namely the target display screen is a display screen with a port data return value of 1. Since both the main display and the sub-display need to be determined from the target display screens, the number of the target display screens is at least 2.

S1413, constructing the display cluster in the at least two target display screens according to the screen number.

The displays used to construct the display cluster are screened out from the at least two target display screens according to the number of screens output by the screen control model.

The display cluster includes a main display and a sub-display. However, the number of sub-displays is not limited thereto, and the number of sub-displays can be 2, 3, 4 or more in some embodiments, depending on the specific application scenario.

The primary display is typically selected as the first display in the display direction, e.g., the first display from left to right or the first display from top to bottom. However, the selection of the main display is not limited to this; in some embodiments, the display screen directly facing the user's viewing angle is selected as the main display, and the auxiliary displays surround the main display.

In some embodiments, after the target display screen is determined, the displays used to form the display cluster need to be screened through the pattern formed by the target display screen. Referring to fig. 3, fig. 3 is a schematic flow chart illustrating the screening of the main display and the sub-display according to the present embodiment.

As shown in fig. 3, S1413 includes:

S1421, lighting the at least two target display screens to enable the at least two target display screens to be in a light-emitting state;

After the target display screens are obtained by screening, they need to be distinguished from the other display screens, so the target display screens must be displayed in a distinguishing manner. Because the other display screens are not online, they have no display content and remain black; at this point the main program turns on the target display screens so that they are in a light-emitting state.

In some embodiments, other display screens may be in use. In order to distinguish the target display screens from these other display screens, the lighting color of the target display screens needs to be controlled. The lighting color can be controlled by reading the display images of the other display screens and inputting them into a color screening model pre-trained to a convergence state; the color screening model is trained to output, according to the background color of the input image, a color class that has a significant color difference from the input image. Because the color screening model is also a neural network model trained to convergence under supervision, the display color of the target display screens can be obtained by classification from the display images of the other display screens, and the main program then controls the target display screens to uniformly display the classified color, so that the lit target display screens are more conspicuous.

S1422, collecting a set image of the plurality of display screens when the at least two target display screens are in a light-emitting state;

When the target display screens are in a light-emitting state, image acquisition is carried out on the plurality of display screens including the target display screens. The acquisition is performed as follows: the display screens are photographed by a camera arranged for this purpose to obtain the set image.

In some embodiments, the set image can also be obtained as follows: the coordinate information and display content of each display screen are collected, and the display contents of the display screens are stitched together based on the coordinate information to generate the set image.

S1423, extracting a display image formed by the at least two target display screens in the set image;

and extracting a display image composed of the target display screen in the set image, wherein the image extraction mode can be as follows: and establishing an image selection area in the collective image by taking the display content color of the target display screen as a standard, and then cutting an image area except the image selection area to obtain a display image consisting of the target display screen.

In some embodiments, when the colors of the other display screens are all black, an image selection area is established in the collective image by taking black as a standard, and then the established selection area is cut out from the collective image, so that the obtained image is the display image composed of the target display screen.
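One simple realization of this extraction, assuming the set image is an H x W x 3 NumPy array in which the non-target screens appear (near-)black and at least one target screen is lit (the threshold and bounding-box crop are illustrative assumptions):

```python
import numpy as np

def extract_display_image(collective: np.ndarray, black_threshold: int = 10) -> np.ndarray:
    # Pixels brighter than the "black" threshold are treated as lit target screens.
    lit = collective.max(axis=-1) > black_threshold
    rows, cols = np.where(lit)
    # Crop the set image to the bounding box of the lit region.
    return collective[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```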

S1424, preprocessing the display image, and determining a center point of the preprocessed display image based on a preset center determination rule;

The display image is preprocessed as follows: the image of each target display screen in the display image is reduced to a point, taking the screen area of each target display screen as a unit, to generate a dot diagram of the target display screens. Reducing the images of the target display screens to points preserves the relative positional relationship of the target displays while greatly reducing the amount of data contained, which helps improve the efficiency of the subsequent processing of the display image.

After the dot diagram of the target display screens is determined, the center point in the dot diagram is determined based on the dot diagram. The center point is determined as follows: when the pattern formed by the target display screens represented by the dot diagram is a regular pattern, such as a circle, a triangle, a square, a rectangle, a regular polygon or another regular pattern, the center determination rule of the corresponding regular pattern is invoked, and the center point of the dot diagram is generated.

In some embodiments, the center point is determined by marking the area facing the user's view as the center point according to the orientation of the user. After the display image is converted into a dot diagram, the relative position information of the user is marked in the dot diagram as a dot mark according to the relative positional relationship between the user and the display screens, but the user's position mark differs from the dots of the target display screens in pixel value or shape. The dot diagram marked with the user's position information is input into a preset center point recognition model; the center point recognition model is a neural network model obtained through supervised training, and according to the position of the user in the dot diagram it classifies and outputs the center point of the dot diagram that directly faces the user.

S1425, determining the main display based on the central point, and selecting the at least one secondary display around the main display according to the main display and the screen number.

According to the mapping relationship between the dot diagram and the display image, the position of the center point in the display image can be determined from its position in the dot diagram, and the target display screen corresponding to the center point is then determined from the mapping relationship between the display image and the target display screens. The target display screen corresponding to the center point is the main display of the display cluster. After the main display is selected, the auxiliary displays are selected clockwise, according to the number of screens, starting from the display screen adjacent to the left of the main display, so that the auxiliary displays surround the main display. In this layered, progressive manner, a plurality of auxiliary displays surrounding the main display are screened out, and the main display and the auxiliary displays together form the display cluster.

Lighting the target display screens creates a display difference between the target display screens and the other display screens. The display screens are imaged based on this display difference, the display image formed by the set of target display screens is extracted from the resulting set image, and the center point of the display image is extracted, so that the position of the center point can be quickly determined among the plurality of target display screens; the main display determined from the position of the center point gives the user a better visual experience.

In some embodiments, the center point is identified from the user's location, and is determined by marking the user's position information in the dot diagram. Referring to fig. 4, fig. 4 is a schematic flow chart illustrating the determination of the center point according to the present embodiment.

As shown in fig. 4, S1424 includes:

S1431, obtaining relative position information between the target user and the plurality of display screens;

After the display image is converted into a dot diagram, the relative position information of the user is marked in the dot diagram as a dot mark according to the relative positional relationship between the user and the display screens, but the user's position mark differs from the dots of the target display screens in pixel value or shape.

The relative position information between the target user and the plurality of display screens can be calculated by a position sensor and a distance sensor which are fixed near or worn on the user. However, the manner of acquiring the relative position information is not limited to this, and in some embodiments, the relative position information can be obtained by photographing a panoramic image between the display screen and the target user and then processing the panoramic image according to an image recognition technique.

S1432, performing dot processing on the target display screens in the display image, taking each display screen as a unit, to generate a dot diagram of the display image;

The display image is preprocessed as follows: the image of each target display screen in the display image is reduced to a point, taking the screen area of each target display screen as a unit, to generate a dot diagram of the target display screens. Reducing the images of the target display screens to points preserves the relative positional relationship of the target displays while greatly reducing the amount of data contained, which helps improve the efficiency of the subsequent processing of the display image.
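
A minimal sketch of the dot processing, assuming each target display screen's rectangular region in the display image is already known (the region format and screen ids are illustrative assumptions):

```python
def to_dot_diagram(screen_regions: dict[str, tuple[int, int, int, int]]) -> dict[str, tuple[float, float]]:
    # Each target display screen's region (left, top, width, height) in the
    # display image is reduced to a single representative point (its centre),
    # preserving the relative positions while shrinking the data volume.
    return {
        screen_id: (left + width / 2.0, top + height / 2.0)
        for screen_id, (left, top, width, height) in screen_regions.items()
    }
```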

S1433, marking a position mark of the target user in the dot diagram according to the relative position information;

After the display image is converted into a dot diagram, the position mark of the user is marked in the dot diagram according to the relative positional relationship between the user and the display screens. The position mark is a dot mark, but it differs from the dots of the target display screens in pixel value or shape.

S1434, inputting the dot diagram carrying the position mark into a preset central point recognition model, wherein the central point recognition model is a neural network model which is trained to be in a convergence state in advance and used for recognizing the central point of the image;

The dot diagram marked with the user's position information is input into the preset center point recognition model; the center point recognition model is a neural network model obtained through supervised training, and according to the position of the user in the dot diagram it classifies and outputs the center point of the dot diagram that directly faces the user.

The center point recognition model is trained in a supervised manner: a large amount of sample data is collected and labeled, with the position of the center point marked in each sample according to the training requirement of the sample data. The initial model is then trained on the labeled samples under supervision, and when the number of training iterations reaches a preset count or the classification accuracy reaches a set threshold, the initial model is trained to convergence and becomes the center point recognition model.

S1435, reading the central point of the display image output by the central point identification model.

After the dot diagram carrying the position mark is classified by the center point recognition model, the center point of the dot diagram is obtained, and the center point of the display image is then obtained according to the mapping relationship between the dot diagram and the display image.

The display image is converted into a dot diagram, the position mark of the user is marked in the dot diagram, and finally the dot diagram is classified by the neural network to obtain the center point of the display image, which improves the extraction efficiency of the center point and the user's subsequent visual experience.

In some embodiments, when a host performs dual-screen or multi-screen display, an application window needs to be constructed for each sub-display; the application window serves as a virtual display container for the sub-display, receiving the display content assigned by the main program. Referring to fig. 5, fig. 5 is a schematic flow chart illustrating the process of establishing the sub-display application windows according to the present embodiment.

As shown in fig. 5, before S1500, the method includes:

S1441, acquiring parameter information of the at least one secondary display, wherein the parameter information includes position coordinates of the secondary display and a first screen size;

and acquiring parameter information of each sub-display, wherein the parameter information comprises the position coordinate and the first screen size of each sub-display.

The position coordinates are used for determining the positions of the sub-displays, and the position coordinates are relative position coordinates taking the main display as a reference object. For example, the position coordinates of the main display are constructed as (0,0), and the sub-displays sequentially establish corresponding position coordinates according to the position relationship with the main display.

The first screen size of the sub-display means a screen resolution of each sub-display, and the screen resolution can most intuitively reflect size information of each sub-display.

S1442, constructing at least one application window according to the first screen size, wherein the size of the application window is the same as the first screen size;

The main program needs to construct an application window at the position of each sub-display. The construction takes the upper left corner of the main display as the zero point of a coordinate system, with the resolution of the main display as the boundary. For example, when the resolution of the main display is 640 × 480, the starting point of the application window of the first sub-display located to the right of the main display is (640, 0), and an application window of the same size as that sub-display's resolution is constructed from this starting point; if there is another sub-display to the right of the first one, the starting point of its window is (the abscissa end point of the first sub-display's window, 0), and so on, so that the application windows of a plurality of sub-displays are constructed.

When an auxiliary display is located at some other position relative to the main display, the coordinate system is likewise constructed with the first pixel point at the upper-left vertex of the main display as the starting point, the application window is constructed from the relative positions of the auxiliary display and the main display, and the starting point of the application window is the upper-left corner of that auxiliary display.
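
A minimal sketch of computing the application-window starting points, under the illustrative assumption that the sub-displays are chained to the right of the main display and that resolutions are given as (width, height):

```python
def window_origins(main_resolution: tuple[int, int],
                   sub_resolutions: list[tuple[int, int]]) -> list[tuple[int, int]]:
    # First window starts at the main display's right edge; each subsequent
    # window starts where the previous one ends along the abscissa.
    origins, x = [], main_resolution[0]
    for width, _height in sub_resolutions:
        origins.append((x, 0))
        x += width
    return origins

# e.g. a 640x480 main display with one 640x480 sub-display to its right:
# window_origins((640, 480), [(640, 480)]) -> [(640, 0)]
```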

S1443, correspondingly delivering the at least one application window to the at least one sub-display based on the position coordinates.

The position coordinates of each sub-display are associated with the corresponding application window; the application window of each sub-display is then called based on the position coordinates and routed to that sub-display, completing the arrangement of the sub-displays' virtual containers.

By arranging the virtual containers for each auxiliary display, the division and synchronous display control of the rendering data by the main program can be facilitated, and the display efficiency and the control efficiency are improved.

In some embodiments, when extracting the parameter information of a sub-display, the relative positional relationship between that sub-display and the main display is required. Referring to fig. 6, fig. 6 is a schematic view illustrating a flow of collecting parameter information of the sub-display according to the present embodiment.

As shown in fig. 6, S1441 includes:

S1451, acquiring resolution information of the main display and direction information between the main display and the at least one auxiliary display;

the position coordinates are used for determining the position of each sub-display, and the position coordinates are relative position coordinates taking the main display as a reference object. The direction information of the sub-display relative to the main display is determined according to the position relationship between the sub-display and the main display, for example, if the sub-display is located on the left side of the main display, the direction information of the sub-display and the main display is west, if the sub-display is located above the main display, the direction information of the sub-display and the main display is north, and so on, the direction information of each sub-display is obtained.

After the direction information of each sub-display is obtained, the resolution information of each sub-display is pulled through a preset port.

S1452, determining the at least one auxiliary display position coordinate according to the resolution information and the direction information;

After the resolution information and the direction information of each sub-display are obtained, a coordinate system is constructed with the upper-left corner of the main display as the zero point and the resolution of the main display as the boundary. For example, when the resolution of the main display is 640 × 480, the position coordinate of the first sub-display located on the right side of the main display is (640, 0); if there is another sub-display to the right of the first one, its starting point is (the abscissa end point of the first sub-display, 0), and so on, until the position coordinates of all sub-displays have been determined.

S1453, generating parameter information of the at least one sub-display according to the position coordinates and the first screen size.

The parameter information of each sub-display is generated from its position coordinates and its first screen size. Since the first screen size is the resolution of the sub-display, the parameter information can be assembled from the position coordinates and the resolution of the sub-display.
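As an illustration of S1451 to S1453, the sketch below derives each sub-display's position coordinates from the main display's resolution and the direction information, then packs them together with the first screen size into parameter information; the function name, the dictionary keys and the direction labels are assumptions, not part of the patent.

```python
# Hypothetical sketch: direction information + resolution information
# -> position coordinates -> parameter information per sub-display.
def build_parameter_info(main_resolution, subs):
    """
    main_resolution: (width, height) of the main display.
    subs: list of dicts such as {"direction": "east", "resolution": (w, h)}.
    Returns one parameter-information dict per sub-display.
    """
    main_w, main_h = main_resolution
    params = []
    for sub in subs:
        w, h = sub["resolution"]
        direction = sub["direction"]
        # Map the direction relative to the main display (whose upper-left
        # pixel is the origin) to a position coordinate.
        if direction == "east":      # to the right of the main display
            coord = (main_w, 0)
        elif direction == "west":    # to the left of the main display
            coord = (-w, 0)
        elif direction == "south":   # below the main display
            coord = (0, main_h)
        else:                        # "north": above the main display
            coord = (0, -h)
        params.append({
            "position_coordinates": coord,
            "first_screen_size": (w, h),
        })
    return params

# Example: one 1024 x 768 sub-display to the right of a 640 x 480 main display.
print(build_parameter_info((640, 480), [{"direction": "east", "resolution": (1024, 768)}]))
```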

In some embodiments, when the main display and the sub-display display the rendering data, the rendering data needs to be split so that the main display and the sub-display can display it in a split-screen manner. Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a process of performing a split-screen display on rendering data according to the present embodiment.

As shown in fig. 7, S1500 includes:

S1511, acquiring a second screen size of the main display;

and acquiring a second screen size of the main display, wherein the second screen size is the resolution of the main display, and the second screen size of the main display is acquired by reading the preset port.

S1512, segmenting the rendering data according to the first screen size and the second screen size to generate first display data and at least one piece of second display data;

The rendering data is segmented according to the first screen size and the second screen size. First, the region corresponding to the main display's coordinates and the second screen size is cut out of the rendering data to obtain the first display data. The remaining data is then divided according to each sub-display's positional relationship to the main display and that sub-display's first screen size, yielding the second display data for each sub-display.

S1513, sending the first display data to the main display, and distributing the at least one second display data to the at least one application window for display.

After the rendering data is segmented, the main program sends the first display data to the main display for display and sends each piece of second display data to the corresponding application window for display. The display cluster formed by the main display and the auxiliary displays then synchronously shows the data each is responsible for, completing the split-screen display of the rendering data.
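A minimal sketch of S1511 to S1513 is given below, assuming that the rendering data is a single composited frame held as a NumPy array and that delivery to the main display and to the application windows is abstracted behind hypothetical callbacks; none of these names come from the patent.

```python
# Hypothetical sketch: split one rendered frame into first display data (main
# display) and second display data (one crop per application window), then
# dispatch each piece through a caller-supplied callback.
import numpy as np

def split_and_dispatch(frame, main_size, windows, send_to_main, send_to_window):
    """
    frame: full rendered frame covering the whole display cluster (H x W x C).
    main_size: (width, height) of the main display, i.e. the second screen size.
    windows: list of (x, y, w, h) application-window rectangles expressed in
             the same coordinate system as `frame` (non-negative coordinates;
             windows left of or above the main display are assumed to have
             been shifted into the frame beforehand).
    """
    main_w, main_h = main_size
    # First display data: the region the main display is responsible for.
    send_to_main(frame[:main_h, :main_w])
    # Second display data: one crop per application window.
    for index, (x, y, w, h) in enumerate(windows):
        send_to_window(index, frame[y:y + h, x:x + w])

# Example with a dummy 480 x 1664 frame: a 640 x 480 main display plus one
# 1024 x 480 window to its right.
frame = np.zeros((480, 1664, 3), dtype=np.uint8)
split_and_dispatch(frame, (640, 480), [(640, 0, 1024, 480)],
                   send_to_main=lambda data: print("main:", data.shape),
                   send_to_window=lambda i, data: print(f"window {i}:", data.shape))
```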

Referring to fig. 8, fig. 8 is a schematic diagram of a basic structure of the screen control device based on artificial intelligence according to the present embodiment.

As shown in fig. 8, an artificial intelligence based screen control apparatus includes: an acquisition module 1100, a processing module 1200, a classification module 1300, a reading module 1400, and an execution module 1500. The acquisition module 1100 is configured to acquire rendering data to be displayed in a preset target video memory; the processing module 1200 is configured to perform imaging processing on the rendering data to generate a target frame map, and perform binarization processing on the target frame map to generate a binary frame map; the classification module 1300 is configured to input the binary frame map into a preset screen control model, where the screen control model is a neural network model trained in advance to a convergence state and configured to classify the number of screens displaying the rendering data according to the binary frame map; the reading module 1400 is configured to read the number of screens output by the screen control model, and construct a display cluster according to the number of screens, where the display cluster includes a main display and at least one sub-display; the execution module 1500 is configured to distribute the rendering data to the main display and the at least one sub-display for split-screen display.
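The module layout of fig. 8 could be mirrored, for illustration only, by a class skeleton such as the following; the class name, the method names and the placeholder bodies are assumptions rather than the patent's implementation.

```python
# Hypothetical skeleton mirroring the modules of fig. 8; bodies are placeholders.
class ScreenControlDevice:
    def __init__(self, screen_control_model):
        # Neural network model trained in advance to a convergence state.
        self.screen_control_model = screen_control_model

    def acquire(self):                      # acquisition module 1100
        """Acquire rendering data to be displayed from the preset target video memory."""
        raise NotImplementedError

    def process(self, rendering_data):      # processing module 1200
        """Image the rendering data into a target frame map and binarize it."""
        raise NotImplementedError

    def classify(self, binary_frame_map):   # classification module 1300
        """Feed the binary frame map to the screen control model."""
        return self.screen_control_model(binary_frame_map)

    def build_cluster(self, screen_count):  # reading module 1400
        """Build a display cluster with one main display and at least one sub-display."""
        raise NotImplementedError

    def execute(self, rendering_data, cluster):  # execution module 1500
        """Distribute the rendering data across the cluster for split-screen display."""
        raise NotImplementedError
```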

The screen control device based on artificial intelligence performs binarization processing on the rendering data to be displayed, which reduces the amount of information contained in the rendering data and thus the data volume of subsequent processing, and then performs model classification on the binary frame map. The split-screen display strategy can be switched rapidly according to the real-time display requirement, manual frequent switching of the split-screen display strategy is avoided, and the applicability and the utilization rate of split-screen display are improved.

In some embodiments, the artificial intelligence based screen control apparatus further comprises: a first calling submodule, a first screening submodule and a first constructing submodule. The first calling submodule is used for calling the online state information of a plurality of display screens according to a preset calling interface; the first screening submodule is used for screening at least two target display screens from the plurality of display screens on the basis of the online state information, wherein the target display screens are display screens whose online state information indicates that they are online; the first constructing submodule is used for constructing the display cluster in the at least two target display screens according to the screen number.

In some embodiments, the artificial intelligence based screen control apparatus further comprises: a first control submodule, a first acquisition submodule, a first extraction submodule, a first processing submodule and a first execution submodule. The first control submodule is used for lighting up the at least two target display screens so that the at least two target display screens are in a light-emitting state; the first acquisition submodule is used for acquiring a collective image of the plurality of display screens when the at least two target display screens are in the light-emitting state; the first extraction submodule is used for extracting, from the collective image, a display image formed by the at least two target display screens; the first processing submodule is used for preprocessing the display image and determining a central point of the preprocessed display image based on a preset center determination rule; the first execution submodule is configured to determine the main display based on the central point and select the at least one sub-display around the main display according to the main display and the number of screens.

In some embodiments, the artificial intelligence based screen control apparatus further comprises: a first obtaining submodule, a second processing submodule, a first labeling submodule, a first classification submodule and a first reading submodule. The first obtaining submodule is used for obtaining relative position information between a target user and the plurality of display screens; the second processing submodule is used for performing dot processing on the target display screens in the display image, with each display screen as a unit, to generate a dot diagram of the display image; the first labeling submodule is used for labeling the position mark of the target user in the dot diagram according to the relative position information; the first classification submodule is used for inputting the dot diagram carrying the position mark into a preset central point recognition model, wherein the central point recognition model is a neural network model which is trained in advance to a convergence state and is used for recognizing the central point of an image; the first reading submodule is used for reading the central point of the display image output by the central point recognition model.

In some embodiments, the artificial intelligence based screen control apparatus further comprises: a second obtaining submodule, a third processing submodule and a first delivery submodule. The second obtaining submodule is used for obtaining parameter information of the at least one auxiliary display, wherein the parameter information comprises position coordinates of the auxiliary display and a first screen size; the third processing submodule is used for constructing at least one application window according to the first screen size, wherein the size of the application window is the same as the first screen size; the first delivery submodule is used for correspondingly delivering the at least one application window to the at least one auxiliary display based on the position coordinates.

In some embodiments, the artificial intelligence based screen control apparatus further comprises: a third obtaining submodule, a fourth processing submodule and a second executing submodule. The third obtaining submodule is used for obtaining the resolution information of the main display and the direction information between the main display and the at least one sub-display; the fourth processing submodule is used for determining the position coordinates of the at least one auxiliary display according to the resolution information and the direction information; the second executing submodule is used for generating parameter information of the at least one auxiliary display according to the position coordinates and the first screen size.

In some embodiments, the artificial intelligence based screen control apparatus further comprises: a fourth obtaining submodule, a fifth processing submodule and a third executing submodule. The fourth obtaining sub-module is used for obtaining a second screen size of the main display; the fifth processing submodule is used for segmenting the rendering data according to the first screen size and the second screen size to generate first display data and at least one piece of second display data; the third execution sub-module is used for sending the first display data to the main display and distributing the at least one second display data to the at least one application window for display.

In order to solve the above technical problem, an embodiment of the present invention further provides a computer device. Referring to fig. 9, fig. 9 is a block diagram of a basic structure of a computer device according to the present embodiment.

Fig. 9 schematically illustrates the internal structure of the computer device. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database and computer readable instructions; the database can store control information sequences, and the computer readable instructions, when executed by the processor, cause the processor to implement an artificial intelligence based screen control method. The processor of the computer device provides calculation and control capability and supports the operation of the whole computer device. The memory of the computer device may store computer readable instructions that, when executed by the processor, cause the processor to perform an artificial intelligence based screen control method. The network interface of the computer device is used for connecting to and communicating with the terminal. Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.

In this embodiment, the processor is configured to execute the specific functions of the acquisition module 1100, the processing module 1200, the classification module 1300, the reading module 1400, and the execution module 1500 in fig. 8, and the memory stores the program codes and data required for executing these modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores the program codes and data required for executing all the submodules of the artificial intelligence based screen control device, and the server can call these program codes and data to execute the functions of all the submodules.

The computer device performs binarization processing on the rendering data to be displayed, which reduces the amount of information in the rendering data and thus the data volume of subsequent processing, and then performs model classification on the binary frame map. The split-screen display strategy can be switched rapidly according to the real-time display requirement, manual frequent switching of the split-screen display strategy is avoided, and the applicability and the utilization rate of split-screen display are improved.

The present invention also provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of any of the above-described embodiments of the artificial intelligence based screen control method.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).

Those of skill in the art will appreciate that the various operations, methods, steps, measures, or schemes discussed in this application can be interchanged, modified, combined, or eliminated. Further, other steps, measures, or schemes in the various operations, methods, or flows discussed in this application can also be alternated, altered, rearranged, decomposed, combined, or deleted. Further, steps, measures, or schemes of the prior art that contain the various operations, methods, or flows disclosed in this application may likewise be alternated, modified, rearranged, decomposed, combined, or deleted.

The foregoing is only a partial embodiment of the present application. It should be noted that those skilled in the art can make several modifications and improvements without departing from the principle of the present application, and these modifications and improvements should also be regarded as falling within the protection scope of the present application.
