Interface generation method and equipment

Document No.: 1694289    Publication date: 2019-12-10    Views: 8    Original language: Chinese

Reading note: This technique, "Interface generation method and equipment", was designed and created by Gao Zhang on 2019-07-25. Abstract: An embodiment of the present application provides an interface generation method and device. The method is applied to a device with a development function and provides a way to automatically adjust the layout of visual elements in an interface to be generated and to quickly generate the interface. The method includes: the device first acquires the visual elements of a reference interface and the configuration information of the display screen of a target terminal device, and then determines the visual focus of the visual elements according to the attribute information of the visual elements. The device determines, according to the configuration information of the display screen, an interface layout template corresponding to the configuration information. Finally, the device adjusts the layout of the visual elements in the interface to be generated according to the visual focus and the interface layout template, and generates the interface.

1. An interface generation method, applied to a device with a development function, characterized by comprising:

acquiring visual elements of a reference interface, and acquiring configuration information of a display screen of a target terminal device;

determining a visual focus of the visual elements according to attribute information of the visual elements;

determining, according to the configuration information of the display screen, an interface layout template corresponding to the configuration information;

and adjusting a layout of the visual elements in an interface to be generated according to the visual focus and the interface layout template, and generating the interface.

2. The method according to claim 1, wherein the determining a visual focus of the visual elements according to attribute information of the visual elements comprises:

determining a center point of the reference interface formed by the visual elements as a first reference visual focus;

performing image content recognition on the visual elements, and determining a second reference visual focus corresponding to an image content recognition result;

determining a third reference visual focus corresponding to at least one of color information and a picture type of the visual elements;

and aggregating the first reference visual focus, the second reference visual focus, and the third reference visual focus to obtain the visual focus of the visual elements.

3. The method according to claim 1 or 2, wherein the adjusting a layout of the visual elements in the interface to be generated according to the visual focus and the interface layout template comprises:

adjusting, according to the interface layout template, a position of the visual elements in the interface to be generated and a position of a control in the interface to be generated;

and taking the visual focus as a center point of the interface to be generated, cropping and/or scaling the visual elements, and adjusting a size of the control.

4. The method according to any one of claims 1 to 3, wherein the target terminal device supports a split-screen function, a display mode of the interface to be generated is a split-screen mode, and the interface to be generated comprises a first interface to be generated of a first application and a second interface to be generated of a second application;

the acquiring visual elements of a reference interface comprises:

acquiring a first visual element of a reference interface of the first application and a second visual element of a reference interface of the second application;

the determining a visual focus of the visual elements according to attribute information of the visual elements comprises:

determining a first visual focus of the first interface to be generated of the first application according to attribute information of the first visual element, and determining a second visual focus of the second interface to be generated of the second application according to attribute information of the second visual element;

the determining, according to the configuration information of the display screen, an interface layout template corresponding to the configuration information comprises:

determining an interface layout template of the interface to be generated in the split-screen mode according to the configuration information of the display screen and the split-screen mode;

and the adjusting a layout of the visual elements in the interface to be generated according to the visual focus and the interface layout template and generating the interface comprises:

adjusting a layout of the first visual element in the first interface to be generated according to the interface layout template of the interface to be generated in the split-screen mode and the first visual focus, and generating an interface of the first application;

and adjusting a layout of the second visual element in the second interface to be generated according to the interface layout template of the interface to be generated in the split-screen mode and the second visual focus, and generating an interface of the second application.

5. The method according to any one of claims 1 to 4, wherein the configuration information of the display screen comprises at least one of the following: a shape of the display screen, a resolution of the display screen, whether touch is supported, and whether folding is supported;

and the interface layout template comprises at least one of the following: an interface display style, layout parameters, and a control response mode.

6. A device, comprising a processor and a memory;

wherein the memory is configured to store one or more computer programs that, when executed by the processor, cause the device to perform:

acquiring visual elements of a reference interface, and acquiring configuration information of a display screen of a target terminal device;

determining a visual focus of the visual elements according to attribute information of the visual elements;

determining, according to the configuration information of the display screen, an interface layout template corresponding to the configuration information;

and adjusting a layout of the visual elements in an interface to be generated according to the visual focus and the interface layout template, and generating the interface.

7. The device according to claim 6, wherein the one or more computer programs stored in the memory, when executed by the processor, cause the device to specifically perform:

determining a center point of the reference interface formed by the visual elements as a first reference visual focus;

performing image content recognition on the visual elements, and determining a second reference visual focus corresponding to an image content recognition result;

determining a third reference visual focus corresponding to at least one of color information and a picture type of the visual elements;

and aggregating the first reference visual focus, the second reference visual focus, and the third reference visual focus to obtain the visual focus of the visual elements.

8. The device according to claim 6 or 7, wherein the one or more computer programs stored in the memory, when executed by the processor, cause the device to specifically perform:

adjusting, according to the interface layout template, a position of the visual elements in the interface to be generated and a position of a control in the interface to be generated;

and taking the visual focus as a center point of the interface to be generated, cropping and/or scaling the visual elements, and adjusting a size of the control.

9. The device according to any one of claims 6 to 8, wherein the target terminal device supports a split-screen function, a display mode of the interface to be generated is a split-screen mode, and the interface to be generated comprises a first interface to be generated of a first application and a second interface to be generated of a second application;

and the one or more computer programs stored in the memory, when executed by the processor, cause the device to specifically perform:

acquiring a first visual element of a reference interface of the first application and a second visual element of a reference interface of the second application;

determining a first visual focus of the first interface to be generated of the first application according to attribute information of the first visual element, and determining a second visual focus of the second interface to be generated of the second application according to attribute information of the second visual element;

determining an interface layout template of the interface to be generated in the split-screen mode according to the configuration information of the display screen and the split-screen mode;

adjusting a layout of the first visual element in the first interface to be generated according to the interface layout template of the interface to be generated in the split-screen mode and the first visual focus, and generating an interface of the first application;

and adjusting a layout of the second visual element in the second interface to be generated according to the interface layout template of the interface to be generated in the split-screen mode and the second visual focus, and generating an interface of the second application.

10. The device according to any one of claims 6 to 9, wherein the configuration information of the display screen comprises at least one of the following: a shape of the display screen, a resolution of the display screen, whether touch is supported, and whether folding is supported;

and the interface layout template comprises at least one of the following: an interface display style, layout parameters, and a control response mode.

11. A computer-readable storage medium, comprising computer instructions that, when run on a device, cause the device to perform the interface generation method according to any one of claims 1 to 5.

12. A chip, coupled with a memory and configured to execute a computer program stored in the memory, to perform the interface generation method according to any one of claims 1 to 5.

Technical Field

The present application relates to the field of terminal technologies, and in particular, to an interface generation method and device.

Background

With the rapid development of network technologies, intelligent interaction devices such as smartphones, tablet computers, and interactive smart tablets are increasingly popular, bringing great convenience to people's life, study, and work. Because these devices have different resolutions (for example, the screen resolutions of a smartphone and a tablet computer differ), the interface layouts of the graphical user interfaces (GUIs) of the same application on the smartphone and the tablet computer also differ. In the prior art, designing and developing the GUI for each type of device relies on developers to manually design visual elements as needed, which entails a large development workload and a long development cycle.

Disclosure of Invention

The present application provides an interface generation method and device, to provide a way to automatically adjust the layout of visual elements in an interface to be generated and to quickly generate the interface.

In a first aspect, an embodiment of the present application provides an interface generation method applicable to a device with a development function. The method includes: the device first acquires the visual elements of a reference interface and the configuration information of the display screen of a target terminal device, and then determines the visual focus of the visual elements according to the attribute information of the visual elements. The device determines, according to the configuration information of the display screen, an interface layout template corresponding to the configuration information. Finally, the device adjusts the layout of the visual elements in the interface to be generated according to the visual focus and the interface layout template, and generates the interface.
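The four steps of the first aspect can be sketched end to end as follows. This is an illustrative sketch only: all class and function names (`VisualElement`, `ScreenConfig`, `generate_interface`, the dict-based template, the bounding-box focus heuristic) are assumptions for the example, not APIs defined by the application.

```python
from dataclasses import dataclass

@dataclass
class VisualElement:
    name: str
    x: int
    y: int
    width: int
    height: int

@dataclass
class ScreenConfig:
    shape: str            # e.g. "rectangle" or "circle"
    resolution: tuple     # (width, height) in pixels
    touch: bool
    foldable: bool

def determine_visual_focus(elements):
    # Step 2, heavily simplified: use the centre of the bounding box of
    # all elements as a stand-in for the aggregated visual focus.
    xs = [e.x for e in elements] + [e.x + e.width for e in elements]
    ys = [e.y for e in elements] + [e.y + e.height for e in elements]
    return ((min(xs) + max(xs)) // 2, (min(ys) + max(ys)) // 2)

def select_template(config, templates):
    # Step 3: look up the layout template keyed by the screen configuration.
    return templates[(config.shape, config.resolution)]

def generate_interface(elements, config, templates):
    # Steps 1-4 end to end: the caller supplies the reference elements
    # (step 1); we compute the focus, look up a template, and re-anchor
    # every element so the focus lands on the template's centre point.
    focus = determine_visual_focus(elements)
    template = select_template(config, templates)
    cx, cy = template["center"]
    dx, dy = cx - focus[0], cy - focus[1]
    return [VisualElement(e.name, e.x + dx, e.y + dy, e.width, e.height)
            for e in elements]
```

A real implementation would add the cropping, scaling, and control-resizing described below; the sketch only shows how the focus and template jointly determine the new layout.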

In this embodiment of the present application, the device can automatically adjust the layout of visual elements in the interface to be generated by reusing the visual elements of an existing interface, and quickly generate the interface, which improves the reuse rate of visual elements and shortens the development cycle.

In one possible design, the device determines a center point of a reference interface formed by the visual elements as a first reference visual focus; performing image content recognition on the visual elements, and determining a second reference visual focus corresponding to the image content recognition result; determining a third reference visual focus corresponding to at least one of color information and picture type of the visual element; and finally, aggregating the first reference visual focus, the second reference visual focus and the third reference visual focus to obtain the visual focus of the visual element.

In this embodiment of the present application, the device can determine a better visual focus through this method, which helps improve the layout effect of the generated interface.

In one possible design, the device adjusts the position of the visual elements in the interface to be generated and the position of a control in the interface to be generated according to the interface layout template; the device then takes the visual focus as the center point of the interface to be generated, crops and/or scales the visual elements, and adjusts the size of the control.
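Cropping a visual element so that the visual focus becomes the center point of the target interface can be sketched as below. The concrete strategy (largest crop of the target aspect ratio centred on the focus, clamped to the image bounds) is an assumption; the application does not specify the crop algorithm.

```python
def crop_around_focus(img_w, img_h, focus, target_w, target_h):
    """Return (left, top, right, bottom) of a crop with the target
    aspect ratio, centred on the focus as far as the image allows."""
    target_ratio = target_w / target_h
    if img_w / img_h > target_ratio:
        # Image is too wide: keep full height, trim the width.
        crop_h, crop_w = img_h, int(img_h * target_ratio)
    else:
        # Image is too tall: keep full width, trim the height.
        crop_w, crop_h = img_w, int(img_w / target_ratio)
    fx, fy = focus
    # Centre on the focus, then clamp so the crop stays inside the image.
    left = min(max(fx - crop_w // 2, 0), img_w - crop_w)
    top = min(max(fy - crop_h // 2, 0), img_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)
```

The returned rectangle would then be scaled to the resolution of the target display and the controls resized to the template's layout parameters.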

In this embodiment of the present application, the device can generate an interface with a good layout effect according to this method, which improves the reuse rate of visual elements and shortens the development cycle.

In one possible design, if the target terminal device supports the split-screen function, the display mode of the interface to be generated is a split-screen mode, and the interface to be generated comprises a first interface to be generated of a first application and a second interface to be generated of a second application. The device may acquire a first visual element of a reference interface of the first application and a second visual element of a reference interface of the second application; determine a first visual focus of the first interface to be generated according to the attribute information of the first visual element, and a second visual focus of the second interface to be generated according to the attribute information of the second visual element; and determine an interface layout template of the interface to be generated in the split-screen mode according to the configuration information of the display screen and the split-screen mode. Finally, the device adjusts the layout of the first visual element in the first interface to be generated according to that interface layout template and the first visual focus, and generates the interface of the first application; and adjusts the layout of the second visual element in the second interface to be generated according to that interface layout template and the second visual focus, and generates the interface of the second application.
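In the split-screen case, one template can describe both panes, and the single-interface procedure runs once per application with each pane's geometry. The even two-way split and the dict-based pane description are assumptions for the sketch.

```python
def split_screen_templates(screen_w, screen_h, orientation="vertical"):
    """Derive per-pane layout templates from the screen configuration
    and the split-screen mode (here: an even two-way split)."""
    if orientation == "vertical":
        # Panes side by side.
        pane = (screen_w // 2, screen_h)
        return [{"origin": (0, 0), "size": pane},
                {"origin": (screen_w // 2, 0), "size": pane}]
    # Panes stacked top and bottom.
    pane = (screen_w, screen_h // 2)
    return [{"origin": (0, 0), "size": pane},
            {"origin": (0, screen_h // 2), "size": pane}]

def pane_center(template):
    """Centre point of a pane, i.e. where each application's visual
    focus would be placed."""
    (ox, oy), (w, h) = template["origin"], template["size"]
    return (ox + w // 2, oy + h // 2)
```

With these templates, the first application's visual focus would be mapped to `pane_center(templates[0])` and the second application's to `pane_center(templates[1])`.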

In this embodiment of the present application, the device can automatically adjust the layout of visual elements in the interfaces to be generated by reusing the visual elements of the interfaces before screen splitting, and quickly generate the interfaces, which improves the reuse rate of visual elements and shortens the development cycle.

In one possible design, the configuration information of the display screen includes at least one of the following: the shape of the display screen, the resolution of the display screen, whether touch is supported, and whether folding is supported; and the interface layout template includes at least one of the following: an interface display style, layout parameters, and a control response mode.
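One possible encoding of this configuration information and layout template is sketched below; the field names, default values, and the toy configuration-to-template mapping are assumptions, not structures defined by the application.

```python
from dataclasses import dataclass, field

@dataclass
class DisplayConfig:
    shape: str = "rectangle"          # or "circle", e.g. for a watch
    resolution: tuple = (1080, 1920)  # (width, height) in pixels
    supports_touch: bool = True
    supports_folding: bool = False

@dataclass
class LayoutTemplate:
    display_style: str = "list"                        # interface display style
    layout_params: dict = field(default_factory=dict)  # margins, sizes, ...
    control_response: str = "touch"                    # control response mode

def pick_template(config: DisplayConfig) -> LayoutTemplate:
    # Toy mapping from configuration to template: a non-touch screen
    # (e.g. a TV) responds to remote-control focus instead of taps, and
    # a circular screen gets a circular display style.
    response = "touch" if config.supports_touch else "remote-focus"
    style = "circular" if config.shape == "circle" else "list"
    return LayoutTemplate(display_style=style, control_response=response)
```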

In a second aspect, an embodiment of the present application provides an apparatus, which includes a processor and a memory. Wherein the memory is used to store one or more computer programs; the one or more computer programs stored in the memory, when executed by the processor, enable the apparatus to implement any of the possible design approaches of any of the aspects described above.

In a third aspect, the present application further provides an apparatus including a module/unit for performing the method of any one of the possible designs of any one of the above aspects. These modules/units may be implemented by hardware, or by hardware executing corresponding software.

In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium comprising a computer program that, when run on a device, causes the device to perform any one of the possible design methods of any one of the above aspects.

In a fifth aspect, an embodiment of the present application further provides a computer program product that, when run on a terminal, causes the device to perform any one of the possible design methods of any one of the above aspects.

These and other aspects of the present application will be more readily apparent from the following description of the embodiments.

Drawings

Fig. 1 is a schematic diagram of various terminal devices according to an embodiment of the present application;

Fig. 2 is a schematic diagram of a communication system according to an embodiment of the present application;

Fig. 3 is a schematic structural diagram of a device according to an embodiment of the present application;

Fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present application;

Fig. 5 is a schematic diagram of an interface generation method according to an embodiment of the present application;

Fig. 6A to Fig. 6D are schematic diagrams of interface layout templates according to an embodiment of the present application;

Fig. 7 is a schematic diagram of visual elements of a reference interface according to an embodiment of the present application;

Fig. 8a to Fig. 8b are schematic interface diagrams of a smartphone according to an embodiment of the present application;

Fig. 9a to Fig. 9c and Fig. 10a to Fig. 10c are schematic interface diagrams of a smart watch according to an embodiment of the present application;

Fig. 11a to Fig. 11c are schematic interface diagrams of a tablet computer according to an embodiment of the present application;

Fig. 12a to Fig. 12c are schematic interface diagrams of a smart television according to an embodiment of the present application;

Fig. 13a to Fig. 13c are schematic interface diagrams of a smartphone with a foldable touchscreen according to an embodiment of the present application;

Fig. 14a to Fig. 14c are schematic interface diagrams of a smartphone supporting split-screen according to an embodiment of the present application;

Fig. 15a to Fig. 15c are schematic interface diagrams of various terminal devices according to an embodiment of the present application;

Fig. 16 is a schematic structural diagram of a terminal device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.

For ease of understanding, some concepts related to the embodiments of the present application are illustrated below by way of example.

Visual elements refer to image information visible to the naked eye of a user, such as buttons, pictures, and text.

The visual focus refers to the position of the pixels that attract the user's attention, and generally refers to the important information in the image, such as a human face or the most eye-catching part of a landscape image.

The layout refers to the arrangement and positions of visual elements, such as the top margin, left margin, bottom margin, right margin, width, and height of a control.
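The layout parameters just listed (the margins, width, and height of a control) can be captured in a small structure; the names here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ControlLayout:
    margin_top: int
    margin_left: int
    margin_bottom: int
    margin_right: int
    width: int
    height: int

    def outer_size(self):
        """Total space the control occupies, margins included."""
        return (self.margin_left + self.width + self.margin_right,
                self.margin_top + self.height + self.margin_bottom)
```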

With the popularization of mobile terminal devices such as smartphones and tablet computers, users have gradually become accustomed to going online through application (APP) client software, and at present all major domestic e-commerce companies have their own APP clients, marking that APP client software has begun to come into its own. However, as the resolutions and screen sizes of smartphones and tablet computers continue to evolve, devices of different models may differ in resolution and screen size; for example, the resolutions and screen sizes of the smartphone 10, the smart watch 20, the tablet computer 30, and the smart television 40 shown in Fig. 1 all differ. Therefore, APP client software whose layout looks good on the smartphone 10 may, if directly installed and run on the smart watch 20, suffer from an unsatisfactory layout, such as a mismatched aspect ratio of the interface or unreasonable positions of controls in the interface.

To solve the above problems, a common solution in the prior art is to have a user interface (UI) engineer redraw a set of image resources suitable for the smart watch, have a software developer revise the code of the APP client software in an integrated development environment (IDE), and then compile and package the revised code together with the new image resources so that the APP client software lays out well on the smart watch. An IDE is an application program used as a program development environment, and generally includes a code editor, a compiler, a debugger, and a graphical UI tool; Xcode on the iOS system and Android Studio on the Android system are typical IDEs. This solution therefore relies on a UI engineer to manually design visual elements as needed; the development workload is large, the development cycle is long, and the reuse rate of the image resources of the APP client software is low.

In view of this, an embodiment of the present application provides an interface generation method: the device first obtains the visual elements of a reference interface and the interface layout template of the interface to be generated; the device then determines the visual focus of the visual elements according to the attribute information of the visual elements and the configuration information of the display screen of the device; on this basis, the device determines the layout of the visual elements in the interface to be generated according to the visual focus and the interface layout template, and finally generates an interface with a good layout effect. In this embodiment of the present application, the device can automatically adjust the layout of visual elements in the interface to be generated by reusing the visual elements of an existing interface and quickly generate the interface, which improves the reuse rate of visual elements and shortens the development cycle.

As shown in Fig. 2, an embodiment of the present application provides a communication system, which includes a terminal device 100 and a device 200; the terminal device 100 and the device 200 may be directly connected through a data line or may communicate with each other through a communication network. An integrated development environment (IDE) is installed on the device 200 for compiling and generating software packages for the terminal device 100.

The communication network may be a local area network, or a wide area network switched through a relay device. When the communication network is a local area network, it may be, for example, a Wi-Fi hotspot network, a Wi-Fi direct network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network. When the communication network is a wide area network, it may be, for example, a third-generation mobile communication technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future evolved public land mobile network (PLMN), the Internet, or the like.

Specifically, the device 200 may automatically generate the code of an application interface by using the interface generation method, and a developer may compile and package it on the device 200 to generate a software package. The terminal device 100 acquires the software package from the device 200 and installs it locally; when the application runs on the terminal device 100, the display screen of the terminal device 100 displays the interface generated according to the interface generation method.

In some embodiments of the present application, the device 200 may be a server with a compiling function or a cloud server. Fig. 3 is a block diagram of a partial structure of the device 200 according to the embodiments of the present application. As shown in Fig. 3, the device 200 may include a processor 201, a memory 202, and a transceiver 203, where one or more computer programs are stored in the memory 202 and configured to be executed by the one or more processors 201.

The processor 201 may be a central processing unit (CPU), a digital processing unit, or the like. As the control center of the device 200, it connects the various parts of the entire device through various interfaces and lines, and performs the various functions and data processing of the device 200 by running or executing the computer programs stored in the memory 202 and calling the data stored in the memory 202.

The memory 202 is configured to store the computer programs to be executed. If the device 200 is a cloud server, the memory 202 further stores a compilation result obtained from a server with a compiling function, where the compilation result includes the application software package of an application program. If the device 200 is itself a server with a compiling function, the memory 202 stores the operating system source code and the compilation result generated by compiling that source code, where the compilation result likewise includes the application package of the application.

The transceiver 203 is configured to send the software package generated by the processor 201 to the terminal device 100.

The specific connection medium between the processor 201 and the memory 202 is not limited in the embodiments of the present application. In Fig. 3, the memory 202, the processor 201, and the transceiver 203 are connected by a bus 204 as an example; the bus is represented by a thick line in the figure, and the connection manner between the other components is merely a schematic illustration and is not limiting. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in Fig. 3, but this does not mean there is only one bus or one type of bus.

The memory 202 may be a volatile memory, such as a random-access memory (RAM); the memory 202 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 202 may be any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 202 may also be a combination of the above.

In some embodiments of the present application, the terminal device 100 may be a portable device, such as a mobile phone, a tablet computer, or a wearable device with a wireless communication function (e.g., a smart watch). Exemplary portable devices include, but are not limited to, devices running iOS, Android, or another operating system. It should also be understood that in some other embodiments of the present application, the device may not be a portable device but a desktop computer.

Fig. 4 shows a schematic structural diagram of the terminal device 100.

The terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a SIM card interface 195, and the like. The sensor module 180 may include a gyroscope sensor 180A, an acceleration sensor 180B, a proximity light sensor 180G, a fingerprint sensor 180H, a touch sensor 180K, and a rotation axis sensor 180M (of course, the terminal device 100 may further include other sensors, such as a temperature sensor, a distance sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, and the like, which are not shown in the figure).

It is to be understood that the illustrated structure in the embodiments of the present application does not constitute a specific limitation on the terminal device 100. In other embodiments of the present application, the terminal device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller may be the nerve center and command center of the terminal device 100; it can generate operation control signals according to instruction operation codes and timing signals, to control instruction fetching and instruction execution.

A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. It may hold instructions or data that the processor 110 has just used or uses cyclically; if the processor 110 needs them again, they can be fetched directly from this memory. Avoiding repeated accesses reduces the waiting time of the processor 110 and thus improves system efficiency. In this embodiment of the present application, the processor 110 may run the software package obtained from the device 200.

The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.

The camera 193 (a front camera or a rear camera, or one camera that serves as both) is used to capture still images or video. In general, the camera 193 includes a lens group and an image sensor, where the lens group includes a plurality of lenses (convex or concave) for collecting the optical signal reflected by the object to be photographed and transferring it to the image sensor. The image sensor generates an original image of the object to be photographed from the optical signal.

The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes the instructions stored in the internal memory 121 to run the various functional applications and data processing of the terminal device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and the code of application programs (such as a camera application, a WeChat application, etc.). The data storage area may store data created during use of the terminal device 100 (e.g., images and videos captured by the camera application).

In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).

The function of the sensor module 180 is described below.

The gyro sensor 180A (or simply, the gyroscope), a main component of the IMU, may be used to determine the motion attitude of the terminal device 100. In some embodiments, the angular velocity of the terminal device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180A.

The acceleration sensor 180B (or simply, accelerometer), which is a main component of the IMU, may be used to detect the magnitude of acceleration of the terminal device 100 in various directions (generally, three axes). That is, the acceleration sensor 180B may be used to detect the current motion state of the terminal device 100, such as shaking or standing still.

The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The terminal device emits infrared light to the outside through the light emitting diode. The terminal device detects infrared reflected light from a nearby object using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device. When insufficient reflected light is detected, the terminal device may determine that there is no object near the terminal device.
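The threshold decision described above can be sketched as follows; the normalized light level and the threshold value are illustrative assumptions, not values from the source.

```python
def object_nearby(reflected_ir_level: float, threshold: float = 0.5) -> bool:
    """Return True when enough reflected infrared light is detected,
    i.e. an object is near the terminal device. The 0-1 light scale
    and the threshold value are assumed for illustration."""
    return reflected_ir_level >= threshold
```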

The gyro sensor 180A (or the acceleration sensor 180B) may transmit the detected motion state information (such as an angular velocity) to the processor 110. The processor 110 determines whether the terminal device 100 is currently in the holding state or the tripod state (for example, when the angular velocity is not 0, the terminal device 100 is in the holding state) based on the motion state information.
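A minimal sketch of that classification, assuming a small noise threshold (the source only says the device is in the holding state "when the angular velocity is not 0"):

```python
def motion_state(angular_velocity: float, noise_threshold: float = 0.05) -> str:
    """Classify the device as hand-held ('holding') when the measured
    angular velocity exceeds a small noise floor, otherwise as being
    on a tripod. The noise threshold is an assumed detail."""
    return "holding" if abs(angular_velocity) > noise_threshold else "tripod"
```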

The fingerprint sensor 180H is used to collect fingerprints. The terminal device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, answering incoming calls with a fingerprint, and the like.

The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together they form a touchscreen. The touch sensor 180K is used to detect a touch operation performed on or near it and may pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal device 100 at a position different from that of the display screen 194.

Illustratively, the display screen 194 of the terminal device 100 displays a main interface that includes icons of a plurality of applications (such as a camera application, a WeChat application, etc.). When the user clicks the icon of the sweeping-robot application in the main interface through the touch sensor 180K, the processor 110 is triggered to start the sweeping-robot application and open the job map. The display screen 194 then displays an interface of the sweeping-robot application, such as the job map interface.

The wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 151, the wireless communication module 152, a modem processor, a baseband processor, and the like.

The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.

The mobile communication module 151 may provide a solution for wireless communication, including 2G/3G/4G/5G, applied to the terminal device 100. The mobile communication module 151 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 151 may receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 151 may also amplify the signal modulated by the modem processor and convert it into an electromagnetic wave radiated through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 151 may be provided in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 151 may be disposed in the same device as at least some of the modules of the processor 110.

The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 151 or other functional modules, independent of the processor 110.

The wireless communication module 152 may provide solutions for wireless communication applied to the terminal device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 152 may be one or more devices integrating at least one communication processing module. The wireless communication module 152 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 152 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into an electromagnetic wave radiated via the antenna 2.

In addition, the terminal device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. The terminal device 100 may receive input from the button 190 and generate key signal input related to user settings and function control of the terminal device 100. The terminal device 100 may generate a vibration alert (such as an incoming-call vibration alert) using the motor 191. The indicator 192 in the terminal device 100 may be an indicator light and may be used to indicate a charging state, a change in battery level, a message, a missed call, a notification, or the like. The SIM card interface 195 in the terminal device 100 is used to connect a SIM card. The SIM card can be brought into contact with or separated from the terminal device 100 by inserting it into or pulling it out of the SIM card interface 195.

It should be understood that, in practical applications, the terminal device 100 may include more or fewer components than those shown in fig. 4; the embodiment of the present application does not limit this.

As shown in fig. 5, which is a schematic flowchart of an interface generation method provided in an embodiment of the present application, the method may be executed by the device 200. More specifically, as shown in fig. 3, the processor 201 in the device 200 may call a computer program stored in the memory 202 and execute it in conjunction with the transceiver 203. The specific steps of the method are as follows.

In step 501, the processor 201 in the device 200 may obtain, through the transceiver 203, the visual elements of the reference interface and the configuration information of the display screen of the target terminal device.

The visual elements of the reference interface of an application program may come from the target terminal device or from another terminal device. A visual element is an interface display object of the reference interface. The configuration information of the display screen of the target terminal device includes the screen size, the screen resolution, whether a touch function is supported, whether the display is full-screen, and the like.
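As a rough illustration, the display configuration listed above could be modeled as follows; all field names are hypothetical, not from the source.

```python
from dataclasses import dataclass

@dataclass
class DisplayConfig:
    """Hypothetical model of the display-screen configuration
    information described above; field names are illustrative."""
    width_px: int          # horizontal resolution
    height_px: int         # vertical resolution
    size_inches: float     # physical screen size (diagonal)
    supports_touch: bool
    full_screen: bool

    @property
    def aspect_ratio(self) -> float:
        return self.width_px / self.height_px

# Two assumed example devices:
phone = DisplayConfig(1080, 2340, 6.1, True, True)
watch = DisplayConfig(454, 454, 1.39, True, False)
```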

In step 502, the processor 201 in the device 200 determines the visual focus of the visual elements according to the attribute information of the visual elements.

Specifically, the attribute information of a visual element refers to information such as its size, center point, resolution, and color. The memory 202 in the device 200 may store a visual focus library in advance, which stores the correspondence between image recognition results and visual foci, between the colors of visual elements and visual foci, and between the image types of visual elements and visual foci. The processor 201 in the device 200 may then match at least one of the obtained image recognition result, color, and image type of the visual element against the information in the visual focus library stored in the memory 202 to determine at least one reference visual focus. Finally, the processor 201 in the device 200 aggregates the reference visual foci and calculates the position with the highest probability as the visual focus of the obtained visual element.
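The aggregation step might look like the following weighted average; the source only says the device "calculates the position with the highest probability", so the weighting scheme and the confidence scores are assumptions.

```python
def aggregate_focus(candidates):
    """Fuse reference visual foci into one focal point.
    Each candidate is (x, y, weight), with coordinates normalized
    to [0, 1] and the weight acting as an assumed confidence score."""
    total = sum(w for _, _, w in candidates)
    x = sum(cx * w for cx, _, w in candidates) / total
    y = sum(cy * w for _, cy, w in candidates) / total
    return x, y

# e.g. a face recognized at (0.6, 0.3) with high confidence, plus a
# color-based reference focus at (0.5, 0.5) with lower confidence:
focus = aggregate_focus([(0.6, 0.3, 0.8), (0.5, 0.5, 0.2)])
```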

In step 503, the processor 201 in the device 200 determines an interface layout template corresponding to the configuration information according to the acquired configuration information of the display screen.

Specifically, different display screens correspond to different interface layout templates. An interface layout template includes at least one of the following: an interface display frame, default layout parameters, and a control response mode. The interface display frame includes the position, size, arrangement mode, and the like of the objects to be displayed. An object to be displayed may be an image resource, a control, and so on.

In a possible embodiment, the memory 202 in the device 200 may store in advance arrangement-mode templates, position templates, alignment-mode templates, and control position templates corresponding to different configuration information, and the processor 201 in the device 200 aggregates these templates to generate the interface layout template of the interface to be generated. As shown in fig. 6A, the arrangement templates may have three arrangements: horizontal (a in fig. 6A), vertical (b in fig. 6A), and overlapping (c in fig. 6A). As shown in fig. 6B, a position template may divide the interface, in the spatial coordinate system, into a plurality of regions such as a title bar, an operation bar, a menu bar, a bottom operation area, and a content region. As shown in fig. 6C, the alignment-mode templates may include straight alignment (a in fig. 6C), curved alignment (b in fig. 6C), and overlapping alignment (c in fig. 6C). As shown in fig. 6D, the control position templates may have a rectangular frame distribution (a in fig. 6D) or a circular frame distribution (b in fig. 6D).
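One way to picture the composition of these sub-templates into a single interface layout template is sketched below; all keys and allowed values are illustrative assumptions, not an API from the source.

```python
ARRANGEMENTS = {"horizontal", "vertical", "overlapping"}   # fig. 6A
ALIGNMENTS = {"straight", "curved", "overlapping"}         # fig. 6C
CONTROL_FRAMES = {"rectangular", "circular"}               # fig. 6D

def build_layout_template(arrangement, regions, alignment, control_frame):
    """Aggregate the four sub-templates into one layout template,
    represented as a plain dict. Keys and value sets are hypothetical."""
    if (arrangement not in ARRANGEMENTS or alignment not in ALIGNMENTS
            or control_frame not in CONTROL_FRAMES):
        raise ValueError("unknown sub-template value")
    return {
        "arrangement": arrangement,
        "regions": list(regions),        # position template, fig. 6B
        "alignment": alignment,
        "control_frame": control_frame,
    }

template = build_layout_template(
    "vertical",
    ["title_bar", "operation_bar", "content", "menu_bar"],
    "straight",
    "rectangular",
)
```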

In step 504, the processor 201 in the device 200 adjusts the layout, in the interface to be generated, of the visual elements obtained in step 501 according to the visual focus obtained in step 502 and the interface layout template obtained in step 503, and generates the interface.

Specifically, the processor 201 in the device 200 may adjust the position of the visual elements acquired in step 501 in the interface to be generated, and the position of the controls, according to the interface layout template obtained in step 503. Then, the processor 201 in the device 200 uses the visual focus obtained in step 502 as the center point of the interface to be generated, crops and/or scales the visual elements obtained in step 501, and adjusts the size of the controls.
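The crop-and-scale step could, for instance, pick the largest window with the target screen's aspect ratio, centered on the visual focus and clamped to the image bounds; this particular policy is an assumption, since the source does not specify one.

```python
def crop_around_focus(img_w, img_h, focus, target_ratio):
    """Return (left, top, width, height) of the largest crop with
    aspect ratio target_ratio (= width / height) that fits inside the
    image, centered on the visual focus and clamped to the bounds."""
    fx, fy = focus
    if img_w / img_h > target_ratio:   # image wider than target: keep full height
        crop_h, crop_w = img_h, img_h * target_ratio
    else:                              # image taller than target: keep full width
        crop_w, crop_h = img_w, img_w / target_ratio
    left = min(max(fx - crop_w / 2, 0), img_w - crop_w)
    top = min(max(fy - crop_h / 2, 0), img_h - crop_h)
    return left, top, crop_w, crop_h

# 3000x2000 image, focus near the right edge, 9:16 portrait screen:
box = crop_around_focus(3000, 2000, (2400, 1000), 9 / 16)
```

The crop is then scaled to the target resolution; clamping keeps the focus as close to the center as the image allows.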

In summary, the interface generation method provided by the embodiment of the application can reduce the development difficulty of the application interface, improve the development efficiency, and reduce the development cost.

In one possible embodiment, the visual focus may be user-defined, or determined by image recognition, for example of a human face or a clearly defined object in the interface. In another possible embodiment, the visual focus may be obtained by fusing several visual focus algorithms. Specifically, the device 200 may determine the center point of the reference interface composed of the visual elements as a first reference visual focus; perform image content recognition on the visual elements and determine a second reference visual focus corresponding to the recognition result; and determine a third reference visual focus corresponding to at least one of the color information and the picture type of the visual elements. The device 200 then aggregates the first, second, and third reference visual foci to obtain the visual focus of the visual elements.

In a possible embodiment, if the display mode of the interface to be generated is the split-screen mode and the target terminal device supports the split-screen function, the device 200 may separately obtain a first visual element of the reference interface of a first application and a second visual element of the reference interface of a second application, then determine a first visual focus of the first interface to be generated of the first application according to the attribute information of the first visual element, and determine a second visual focus of the second interface to be generated of the second application according to the attribute information of the second visual element. The device 200 determines an interface layout template of the interface to be generated in the split-screen mode according to the configuration information of the display screen of the target terminal device and the split-screen mode. Finally, it adjusts the layout of the first visual element in the first interface to be generated according to that interface layout template and the first visual focus to generate a first application interface, and adjusts the layout of the second visual element in the second interface to be generated according to that interface layout template and the second visual focus to generate a second application interface.
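That split-screen flow amounts to one template choice plus two independent layout passes, one per application. Every callable below is an illustrative stand-in for the steps described above, not an API from the source.

```python
def generate_split_screen(elems_a, elems_b, config,
                          find_focus, pick_template, apply_layout):
    """Sketch of the split-screen flow: one layout template is chosen
    for the split-screen mode, then each application's visual elements
    are laid out around their own visual focus."""
    template = pick_template(config, "split_screen")
    first = apply_layout(elems_a, find_focus(elems_a), template)
    second = apply_layout(elems_b, find_focus(elems_b), template)
    return first, second

# Minimal stand-ins showing the call shape (all hypothetical):
result = generate_split_screen(
    ["gallery_img"], ["chat_img"],
    {"w": 1080, "h": 2340},
    find_focus=lambda elems: (0.5, 0.5),
    pick_template=lambda cfg, mode: {"mode": mode},
    apply_layout=lambda elems, focus, tpl: {"elems": elems, "focus": focus, **tpl},
)
```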

The following describes an interface generation method provided by the embodiment of the present application with reference to the accompanying drawings and specific application scenarios. Assume that the memory 202 in the device 200 stores a visual element of a reference interface as shown in fig. 7 in advance, where the reference interface is a reference interface in a magazine lock screen application, and the visual element of the reference interface is an image. Generally, the visual element is large in size and high in resolution so as to be adapted to other terminal devices.

Scene one

The device 200 obtains configuration parameters of the display screen of the smartphone 10 in fig. 1, such as the aspect ratio and resolution of the display screen of the smartphone 10. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 8a. The device 200 determines the visual focus of the visual element (indicated by the arrow in fig. 7) based on the color, picture type, and image recognition result of the visual element shown in fig. 7. Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template, and finally generates the interface shown in fig. 8b.

Scene two

The device 200 obtains configuration parameters of the display screen of the smart watch 20 in fig. 9a, such as the aspect ratio and resolution of the display screen of the smart watch 20. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 9b. The device 200 determines the visual focus of the visual element (indicated by the arrow in fig. 7) based on the color, picture type, and image recognition result of the visual element shown in fig. 7. Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template shown in fig. 9b, and finally generates the interface shown in fig. 9c.

Scene three

The device 200 obtains configuration parameters of the display screen of the smart watch 20 in fig. 10a, such as the aspect ratio and resolution of the display screen of the smart watch 20. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 10b. The device 200 determines the visual focus of the visual element (indicated by the arrow in fig. 7) based on the color, picture type, and image recognition result of the visual element shown in fig. 7. Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template shown in fig. 10b, and finally generates the interface shown in fig. 10c.

Scene four

The device 200 obtains configuration parameters of the display screen of the tablet computer 30 in fig. 11a, such as the aspect ratio and resolution of the display screen of the tablet computer 30. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 11b. The device 200 determines the visual focus of the visual element (indicated by the arrow in fig. 7) based on the color, picture type, and image recognition result of the visual element shown in fig. 7. Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template shown in fig. 11b, and finally generates the interface shown in fig. 11c.

Scene five

The device 200 obtains configuration parameters of the display screen of the smart TV 40 in fig. 12a, such as the aspect ratio and resolution of the display screen of the smart TV 40. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 12b. The device 200 determines the visual focus of the visual element (indicated by the arrow in fig. 7) based on the color, picture type, and image recognition result of the visual element shown in fig. 7. Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template shown in fig. 12b, and finally generates the interface shown in fig. 12c.

Scene six

Assume that the smartphone 10 supports folding; for example, the smartphone 10 may be unfolded from a first angle to a second angle, where the angle refers to the angle between the two screens of the foldable touch screen. When the interface to be generated is the interface in the fully unfolded state shown in fig. 13c, the device 200 obtains configuration parameters of the display screen of the smartphone 10 in fig. 13a, such as the aspect ratio, resolution, and folding angle of the display screen of the smartphone 10. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 13b.

The device 200 acquires the color, picture type, and image recognition result of the visual element shown in fig. 7 to determine the visual focus of the visual element (indicated by the arrow in fig. 7). Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template shown in fig. 13b, and finally generates the interface shown in fig. 13c.

Scene seven

Assume that the smartphone 10 supports split-screen; after the screen is split, for example, the upper half shows an interface of the gallery application and the lower half an interface of the WeChat application. When the interface to be generated is the interface in the split-screen state shown in fig. 14c, the device 200 obtains configuration parameters of the display screen of the smartphone 10 in fig. 14a, such as the aspect ratio, resolution, and split-screen state of the display screen of the smartphone 10. The device 200 then obtains an interface layout template corresponding to the configuration parameters. Specifically, the device 200 may obtain an arrangement template, a position template, an alignment template, a control position template, and the like corresponding to the configuration parameters, and finally generate the interface layout template shown in fig. 14b.

The device 200 acquires the color, picture type, and image recognition result of the visual element shown in fig. 7 to determine the visual focus of the visual element (indicated by the arrow in fig. 7). Then, the device 200 uses the visual focus as the center point of the interface to be generated, crops and scales the visual element, adjusts the layout of the visual element in the interface to be generated according to the interface layout template shown in fig. 14b, and finally generates the interface shown in fig. 14c.

For another example, the magazine lock screen interface of the smart TV is shown in fig. 15a. The device 200 acquires the visual elements in the reference interface shown in fig. 15a and, following the interface generation method described above, uses them to generate the magazine lock screen interface 1501 of the smartphone and the magazine lock screen interface 1502 of the smartwatch shown in fig. 15b.

In other embodiments of the present application, an apparatus having a development function is disclosed in embodiments of the present application, and as shown in fig. 16, the apparatus may include: a touch screen 1601, wherein the touch screen 1601 comprises a touch panel 1606 and a display 1607; one or more processors 1602; a memory 1603; one or more application programs (not shown) and one or more computer programs 1604, which may be connected via one or more communication buses 1605. Wherein the one or more computer programs 1604 are stored in the memory 1603 and configured to be executed by the one or more processors 1602, the one or more computer programs 1604 comprise instructions which may be used to perform the steps as in the respective embodiment of fig. 5.

The embodiment of the present application further provides a computer storage medium, where a computer instruction is stored in the computer storage medium, and when the computer instruction runs on a device, the device executes the relevant method steps to implement the interface generation method in the embodiment.

The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute the above related steps, so as to implement the interface generation method in the above embodiment.

In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the interface generation method in the above method embodiments.

The device, the computer storage medium, the computer program product, or the chip provided in the embodiments of the present application are all configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.

Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the described apparatus embodiments are merely illustrative. The division into modules or units is merely a logical function division; in actual implementation there may be other divisions, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in an electrical, mechanical, or other form.

Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.

The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
