Screen element control method, device, equipment and storage medium

Document No.: 1686773  Publication Date: 2020-01-03  Original language: Chinese

Reading note: This technology, "Screen element control method, device, equipment and storage medium", was designed and created by Li Zhaoxuan on 2019-10-09. Its main content is as follows: The present disclosure provides a screen element control method, device, equipment, and storage medium. The screen element control method is applied to a terminal device with a touch screen to control a screen element that shields the video picture during video playback, and comprises: detecting a touch type of a user touching the touch screen; identifying a touch action of the user touching the touch screen under the touch type; controlling the currently displayed screen element to change according to the touch action, so that the video picture shielded by the screen element is displayed clearly or without shielding; detecting an ending action of the touch action, and determining the final state of the screen element change according to the ending action; and displaying the screen element on the video picture according to the final state of the screen element change. The present disclosure can control the change of elements in the screen according to the user's gestures, improving the user experience.

1. A screen element control method, applied to a terminal device with a touch screen to control a screen element that shields a video picture during video playback, the method comprising the following steps:

detecting a touch type of a user touching the touch screen;

identifying a touch action of a user touching the touch screen under the touch type;

controlling the currently displayed screen element to change according to the touch action, so that the video picture shielded by the screen element is displayed clearly or without shielding, wherein the screen element change comprises at least one of a transparency change and a position change;

detecting an ending action of the touch action, and determining the final state of the screen element change according to the ending action;

and displaying the screen element on the video picture according to the final state of the screen element change.

2. The screen element control method of claim 1,

the ending action comprises at least one of lifting a finger, pausing the gesture, and changing the touch type.

3. The screen element control method of claim 1,

the final state of the screen element change includes at least one of maintaining a last changed state or restoring an initial state.

4. The screen element control method of claim 1,

the transparency change in the screen element change comprises a decrease or an increase in transparency;

the touch action comprises a zooming action, wherein the zooming-in motion of the zooming action corresponds to one of the decrease and the increase in transparency, and the zooming-out motion corresponds to the other.

5. The screen element control method of claim 1,

the position change in the screen element change comprises moving out of the screen or moving back to the screen;

the touch action comprises a zooming action, wherein the zooming-in motion of the zooming action corresponds to one of moving out of the screen and moving back to the screen, and the zooming-out motion corresponds to the other.

6. A screen element control device, applied to a terminal device with a touch screen to control a screen element that shields a video picture during video playback, the device comprising:

the touch detection module is used for detecting the touch type of the touch screen touched by a user;

the action recognition module is used for recognizing a touch action of a user touching the touch screen under the touch type and detecting an ending action of the touch action;

the control change module is used for controlling the currently displayed screen element to change according to the touch action, so that the video picture shielded by the screen element is displayed clearly or without shielding, and for determining the final state of the screen element change according to the ending action, wherein the screen element change comprises at least one of a transparency change and a position change;

and the display module is used for displaying the screen element on the video picture according to the final state of the screen element change.

7. The screen element control apparatus of claim 6,

the touch type comprises at least one of single-point touch, two-point touch, three-point touch, four-point touch, and five-point touch;

the touch action comprises at least one of a zooming action, a sliding action, and a clicking action;

the ending action comprises at least one of lifting a finger, pausing the gesture, and changing the touch type;

the final state of the screen element change includes at least one of maintaining a last changed state or restoring an initial state.

8. A computer device, comprising a memory having a computer program stored therein and a processor which, when executing the computer program, implements the screen element control method of any one of claims 1-5.

9. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, implements the screen element control method according to any one of claims 1-5.

Technical Field

The present disclosure relates to the field of computer software technologies, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a screen element.

Background

With the development of multimedia technology, people increasingly rely on various intelligent devices. Intelligent devices include various devices and terminals, including system terminals that process and control information using computer technology, digital communication network technology, and the like. Currently, smart devices with touch screens, such as mobile phones, are widely used.

On smart devices with touch screens, people often watch short videos or live videos, where many interactive function buttons, commercial elements, and the like are overlaid on the played content. These overlays may block key parts of the content, which inconveniences users.

At present, the main problem is that existing screen element controls offer a poor user experience: they often clear all or most of the interactive function buttons from the screen, leaving the user unable to interact.

Disclosure of Invention

The present disclosure has been made to solve the above problems, and an object of the present disclosure is to provide a simple and efficient screen element control method, device, equipment, and storage medium that can control the change of an element in the screen by a gesture. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

In order to solve the above technical problem, an embodiment of the present disclosure provides a screen element control method, which adopts the following technical solutions:

the screen element control method is applied to a terminal device with a touch screen to control a screen element that shields the video picture during video playback, and comprises the following steps:

detecting a touch type of a user touching the touch screen;

identifying a touch action of a user touching the touch screen under the touch type;

controlling the currently displayed screen element to change according to the touch action, so that the video picture shielded by the screen element is displayed clearly or without shielding, wherein the screen element change comprises at least one of a transparency change and a position change;

detecting an ending action of the touch action, and determining the final state of the screen element change according to the ending action;

and displaying the screen element on the video picture according to the final state of the screen element change.

In order to solve the above technical problem, an embodiment of the present disclosure further provides a screen element control device, applied to a terminal device with a touch screen to control a screen element that shields the video picture during video playback, which adopts the following technical solutions:

the touch detection module is used for detecting the touch type of the touch screen touched by a user;

the action recognition module is used for recognizing a touch action of a user touching the touch screen under the touch type and detecting an ending action of the touch action;

the control change module is used for controlling the currently displayed screen element to change according to the touch action, so that the video picture shielded by the screen element is displayed clearly or without shielding, and for determining the final state of the screen element change according to the ending action, wherein the screen element change comprises at least one of a transparency change and a position change;

and the display module is used for displaying the screen element on the video picture according to the final state of the screen element change.

In order to solve the above technical problem, an embodiment of the present disclosure further provides a computer device, which adopts the following technical solutions:

comprising a memory in which a computer program is stored and a processor which, when executing said computer program, implements a screen element control method as described in the foregoing.

In order to solve the above technical problem, an embodiment of the present disclosure further provides a computer-readable storage medium, which adopts the following technical solutions:

the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements a screen element control method as described above.

Compared with the prior art, the technical solution of the present disclosure allows a screen element to be changed through a user's gesture, using at least one of a transparency change and a position change, so that the video picture shielded by the screen element is gradually displayed clearly or displayed without shielding. The user can also set the final state of the screen element change to restore the initial state, so that the originally shielded video content becomes visible while the interactive buttons, commercial elements, and the like shown during playback are preserved, improving the user experience.

Drawings

FIG. 1 is an exemplary system architecture diagram in which the present disclosure may be applied;

FIG. 2 is a flow diagram for one embodiment of a screen element control method according to the present disclosure;

FIG. 3 is a schematic diagram of one embodiment of a screen element control apparatus according to the present disclosure;

FIG. 4 is a schematic block diagram of one embodiment of a computer device according to the present disclosure.

The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.

Detailed Description

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure; the terms "including" and "having," and any variations thereof, in the description and claims of this disclosure and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of the present disclosure or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.

Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.

In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.

[ System Structure ]

First, the structure of the system of one embodiment of the present disclosure is explained. As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, 104, a network 105, and a server 106. The network 105 serves as a medium for providing communication links between the terminal devices 101, 102, 103, 104 and the server 106.

In the present embodiment, an electronic device (e.g., the terminal device 101, 102, 103, or 104 shown in fig. 1) on which the screen element control method operates can perform transmission of various information through the network 105. Network 105 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. It should be noted that the wireless connection means may include, but is not limited to, a 3G/4G/5G connection, a Wi-Fi connection, a bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB connection, and other now known or later developed wireless connection means.

A user may use terminal devices 101, 102, 103, 104 to interact with a server 106 via a network 105 to receive or send messages or the like. Various client applications, such as a video live and play application, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal device 101, 102, 103, or 104.

The terminal device 101, 102, 103, or 104 may be various electronic devices having a touch screen display and/or supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, head mounted display devices, laptop portable computers, desktop computers, and the like.

The server 106 may be a server that provides various services, such as a background server that provides support for pages displayed on the terminal devices 101, 102, 103, or 104.

It should be noted that the screen element control method provided by the embodiment of the present disclosure is generally executed by the server together with the terminal device 101, 102, 103, or 104.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

Here, the terminal device may implement the method of the disclosed embodiments independently, or by running an application in the Android system in cooperation with other electronic terminal devices; it may also run an application in another operating system, such as the iOS system, the Windows system, or the HarmonyOS (Hongmeng) system, to implement the method of the disclosed embodiments.

[ Screen element control method ]

Referring to FIG. 2, a flow diagram of one embodiment of a screen element control method according to the present disclosure is shown. The screen element control method is applied to a terminal device with a touch screen to control a screen element that shields the video picture during video playback, and comprises the following steps:

and S21, detecting the touch type of the user touching the touch screen of the terminal equipment. The touch type may include single-point touch, two-point touch, three-point touch, four-point touch, five-point touch, and the like, and the number of touch points is not limited.

Here, before the touch type is detected, a touch event detector is added to the screen layout that includes both the content elements and the screen elements. During detection, the event parameters of the touch event detector are first acquired, the finger-press event and its touch type are detected, and the touch type of the multi-touch event is determined. After the touch type is determined, the next step S22 is performed.
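The touch-type decision above can be sketched in Java. This is an illustrative, platform-independent model only: the class and method names `TouchTypeClassifier` and `classify` are hypothetical, and a real implementation would read the pointer count from the platform's touch event API.

```java
// Hypothetical sketch: classify the touch type from the number of active
// touch points reported by a touch event detector.
public class TouchTypeClassifier {
    public enum TouchType { SINGLE, TWO_POINT, THREE_POINT, FOUR_POINT, FIVE_POINT, OTHER }

    public static TouchType classify(int pointerCount) {
        switch (pointerCount) {
            case 1: return TouchType.SINGLE;
            case 2: return TouchType.TWO_POINT;
            case 3: return TouchType.THREE_POINT;
            case 4: return TouchType.FOUR_POINT;
            case 5: return TouchType.FIVE_POINT;
            default: return TouchType.OTHER; // the number of touch points is not limited
        }
    }
}
```

For instance, two simultaneous finger presses would be classified as the two-point touch type.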

And S22, recognizing the touch action of the user touching the touch screen of the terminal equipment under the touch type. Here, the touch action may include a zoom action, a slide action, a click action, and the like. The sliding motion may include sliding motions such as upward sliding, downward sliding, leftward sliding, and rightward sliding, and the direction and shape of the sliding motion are not limited, and may be sliding in a direction such as upward leftward or downward rightward, or may be sliding in a specific shape such as a circle, square, triangle, or specific letter shape. The click action may include a double click action, a single click action, a short click action, a long click action, and the like, and the manner of the click action is not limited.

Here, by combining different touch types with different touch actions, more choices can be provided, and the user can select a control gesture that suits his or her own habits, improving the user experience.

And S23, controlling the change of the currently displayed screen element according to the touch action, so that the video picture blocked by the screen element is displayed clearly or in a non-blocked manner. Here, the screen element change manner may include a transparency change and/or a position change.

The transparency of the screen element may be changed to any value between completely transparent and completely opaque, and the change may be continuous or may jump between fixed transparency levels. The change may be applied to screen elements in a certain area, for example one or more of the left, right, upper, and lower areas of the screen, to elements within a circular area of a certain radius centered on the touch point (or an area of another shape), or to all screen elements. Changing transparency leaves the positions of the screen elements and the element layout of the current video picture unchanged, reducing the impact on the video playback the user is watching.
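As a minimal sketch of the region-limited variant (assuming a circular region around the touch point; the class name `ChangeRegion` is hypothetical), an element can be tested against the change region before its transparency is altered:

```java
// Hypothetical sketch: only screen elements whose center lies inside a
// circular region of radius r around the touch point take part in the
// transparency change.
public class ChangeRegion {
    public static boolean insideCircle(double touchX, double touchY,
                                       double elemX, double elemY, double radius) {
        double dx = elemX - touchX;
        double dy = elemY - touchY;
        // Compare squared distances to avoid a square root.
        return dx * dx + dy * dy <= radius * radius;
    }
}
```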

The position change of the screen element includes moving out of the screen or moving back onto the screen. Each screen element may move to the screen edge nearest to it, all screen elements may gather toward the center of the screen, or all screen elements may move to one side edge, in one direction, or to one position; the specific moving direction and destination are not limited. Changing the position of a screen element also allows animation effects and the like to be added while the element moves, which increases the interactivity of video playback and makes watching the video more enjoyable.

Here, the transparency change and/or position change of the screen element may correspond to any combination of the touch types and touch actions described above, without particular limitation. For example, the transparency of the screen element may be changed by a zooming action under the two-point touch type, e.g. zooming in makes the element more transparent and zooming out makes it more opaque; or the position of the screen element may be changed by a zooming action under the three-point touch type, e.g. zooming in moves the element out of the screen and zooming out moves it back; or, under the five-point touch type, sliding to one side may move the screen elements to that side. The combinations of implementation manners and effects are not limited. Several implementations of the present disclosure are described in detail below.
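One possible dispatch from a (touch type, touch action) combination to a screen element change, matching the examples above, could be sketched as follows. The `GestureDispatch` class and its string keys are hypothetical; the mapping itself is explicitly not limited and could be user-configurable.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map a (touch type, touch action) combination to a
// screen element change; this is one example mapping, not a fixed scheme.
public class GestureDispatch {
    private static final Map<String, String> MAPPING = new HashMap<>();
    static {
        MAPPING.put("2-point/zoom", "transparency change");
        MAPPING.put("3-point/zoom", "position change (out of / back to screen)");
        MAPPING.put("5-point/slide", "move to one side");
    }

    public static String changeFor(int touchPoints, String action) {
        return MAPPING.getOrDefault(touchPoints + "-point/" + action, "no change");
    }
}
```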

And S24, detecting the ending action of the touch action, and determining the final state of the screen element change according to the ending action. Here, the detected ending action may be an event such as the user lifting a finger, pausing the gesture, or changing the touch type; when a preset condition is met, the ending stage is entered. The final state of the screen element change may be either the last changed state or the restored initial state, without particular limitation. The final state may be chosen by the user according to his or her own habits, and the screen element may also be adjusted to any other display state or back to the normal display state.
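The ending-action handling of S24 might be sketched as follows. The names `EndingAction.detect` and `resolve`, and the pause threshold, are hypothetical assumptions; the disclosure leaves both the preset condition and the choice of final state open.

```java
// Hypothetical sketch: detect the ending action of a touch gesture and
// resolve the final state of the screen element change.
public class EndingAction {
    public enum End { FINGER_LIFT, GESTURE_PAUSE, TOUCH_TYPE_CHANGE, NONE }
    public enum FinalState { KEEP_LAST_STATE, RESTORE_INITIAL }

    // An idle time at or above the threshold (ms) counts as a gesture pause.
    public static End detect(boolean fingerLifted, long idleMillis,
                             boolean touchTypeChanged, long pauseThresholdMillis) {
        if (fingerLifted) return End.FINGER_LIFT;
        if (touchTypeChanged) return End.TOUCH_TYPE_CHANGE;
        if (idleMillis >= pauseThresholdMillis) return End.GESTURE_PAUSE;
        return End.NONE;
    }

    // Which final state applies is treated as a user preference here.
    public static FinalState resolve(End end, boolean preferKeepLast) {
        return preferKeepLast ? FinalState.KEEP_LAST_STATE : FinalState.RESTORE_INITIAL;
    }
}
```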

S25, displaying the screen element on the video picture according to the final state of the screen element change, thereby completing the control of the screen element that shields the video picture.

According to this embodiment, while the user watches video playback, the transparency or position of the screen elements is controlled, so that screen elements that shield the played video picture are removed or changed, improving the user's video-watching experience.

It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.

Example method 1

One embodiment of the present disclosure is a method for implementing a change in transparency of a screen element by a zoom gesture in a two-point touch type, comprising the steps of:

step 1: adding a touch event detector to the layout of the screen elements including the content elements and the screen elements, firstly acquiring event parameters of the touch event detector during detection, detecting types of a finger pressing event and touch, and detecting whether the touch type of a multi-point touch event is a two-point touch type.

Step 2: if the two-point touch type is detected in step 1, mark the zooming gesture as entering the starting stage and record the distance X between the two touch points as the initial distance. After the zooming gesture enters the starting stage, monitor the finger movement events and the touch type, check whether the gesture remains a two-point touch, and monitor the changing distance Y between the two points. If Y becomes less than X, a zoom-out gesture is deemed to occur; if Y becomes greater than X, a zoom-in gesture is deemed to occur. Here, for example, a 0.1 enlargement or reduction ratio is deemed to be generated for every 10 pixels of difference Z, where Z = Y - X: if Z < 0, |Z|/10 [reduction ratio] events are generated, and if Z > 0, Z/10 [enlargement ratio] events are generated.

Step 3: for example, in the initial stage the screen element transparency is 1 (the opaque state). Each time the zooming gesture is detected to emit a 0.1 [enlargement ratio] event, the screen element transparency is decreased by 0.1, so that the screen element becomes gradually transparent until its transparency reaches 0 and it enters the completely invisible state. Each time the zooming gesture is detected to emit a 0.1 [reduction ratio] event, the screen element transparency is increased by 0.1, so that the screen element becomes gradually opaque until its transparency reaches 1 and it enters the completely opaque state.

Step 4: detect the finger-lift event. When the number of touch points on the current screen is detected to be unequal to 2, mark the zooming gesture as entering the end stage; after the zooming gesture is detected to emit the [gesture end] event, the transparency of the screen element is changed back to 1 and the element enters the completely opaque state.

Step 5: display the screen element on the video picture in the completely opaque state.

In the above method, the transparency of the screen element is changed by a zooming gesture under the two-point touch type, where the zoom-in motion of the gesture makes the screen element more transparent and the zoom-out motion makes it more opaque. The opposite mapping, in which zooming in makes the element more opaque and zooming out makes it more transparent, is also possible, and no limitation is imposed. The method changes the transparency of the screen elements gradually through the zooming gesture, so the user can settle on a suitable transparency and watch the video playback while the positions of the screen elements and the element layout of the current video picture remain unchanged. Of course, the touch type, the touch action, the transparency change manner, the ending action, the ending state, and the like are not limited to those described above, and are not repeated here.
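The ratio-and-transparency arithmetic of steps 2 and 3 can be sketched as follows. The class `ZoomTransparency` is hypothetical; the 10-pixel step and 0.1 increment simply follow the example figures above.

```java
// Hypothetical sketch of steps 2-3: derive zoom-ratio events from the
// change in two-finger distance and map them onto the element transparency.
public class ZoomTransparency {
    // Z = Y - X; one 0.1 ratio event per 10 pixels of difference.
    // Positive return value: enlargement events; negative: reduction events.
    public static int ratioEvents(double initialDistance, double currentDistance) {
        double z = currentDistance - initialDistance;
        return (int) (z / 10);
    }

    // Start from transparency 1 (opaque); each enlargement event subtracts
    // 0.1 and each reduction event adds 0.1, clamped to [0, 1].
    public static double transparency(double start, int enlargementEvents) {
        double t = start - 0.1 * enlargementEvents;
        return Math.max(0.0, Math.min(1.0, t));
    }
}
```

For example, spreading the fingers from 100 px to 150 px apart yields 5 enlargement events, halving the opacity of the element.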

Example method 2

Another embodiment of the present disclosure is a method for implementing a movement of a screen element to an edge of a screen through a zoom gesture in a two-point touch type, including the steps of:

step 1: adding a touch event detector to the layout of the screen elements including the content elements and the screen elements, firstly acquiring event parameters of the touch event detector during detection, detecting types of a finger pressing event and touch, and detecting whether the touch type of a multi-point touch event is a two-point touch type.

Step 2: if the two-point touch type is detected in step 1, mark the zooming gesture as entering the starting stage and record the distance X between the two touch points as the initial distance. After the zooming gesture enters the starting stage, monitor the finger movement events and the touch type, check whether the gesture remains a two-point touch, and monitor the changing distance Y between the two points. If Y becomes less than X, a zoom-out gesture is deemed to occur; if Y becomes greater than X, a zoom-in gesture is deemed to occur. Here, for example, a 0.1 enlargement or reduction ratio is deemed to be generated for every 10 pixels of difference Z, where Z = Y - X: if Z < 0, |Z|/10 [reduction ratio] events are generated, and if Z > 0, Z/10 [enlargement ratio] events are generated.

Step 3: for example, in the initial stage, first identify the screen edge nearest to the screen element and determine the shortest distance the element must move in one of the up, down, left, or right directions so that it can move completely off the screen and out of view. Denote that direction as [DIRECTION] and the shortest distance as [MOVE_MIN]. Each time the zooming gesture is detected to emit a 0.1 [enlargement ratio] event, the screen element is moved [MOVE_MIN]/10 pixels in the [DIRECTION] direction, until the moved distance reaches [MOVE_MIN] and the element is completely invisible. Each time the zooming gesture is detected to emit a 0.1 [reduction ratio] event, the screen element is moved [MOVE_MIN]/10 pixels in the [-DIRECTION] direction, until the element returns to its original position.

Step 4: detect the finger-lift event. When the number of touch points on the current screen is detected to be unequal to 2, mark the zooming gesture as entering the end stage; after the zooming gesture is detected to emit the [gesture end] event, all screen elements return to their original positions.

Step 5: display the screen elements on the video picture with every screen element returned to its original position.

In the above method, the screen element is moved toward the screen edge by a zooming gesture under the two-point touch type, where the zoom-in motion of the gesture moves the screen element outward and the zoom-out motion moves it back inward. The opposite mapping, in which zooming in moves the element inward and zooming out moves it outward, is also possible, and no limitation is imposed. The method changes the position of the screen elements gradually through the zooming gesture, so the user can settle on a suitable position, and animation effects and the like can be added while the elements move, which increases the interactivity of video playback and makes watching the video more enjoyable. Of course, the touch type, the touch action, the movement manner, the ending action, the ending state, and the like are not limited to those described above, and are not repeated here.
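The nearest-edge computation of step 3 can be sketched as follows. The `NearestEdge` class is hypothetical and assumes a coordinate origin at the top-left corner of the screen, with the element described by its top-left corner and size.

```java
// Hypothetical sketch: find the nearest screen edge for an element and the
// shortest distance [MOVE_MIN] needed to move it fully off-screen in that
// direction. (x, y) is the element's top-left corner; (w, h) its size.
public class NearestEdge {
    public enum Direction { UP, DOWN, LEFT, RIGHT }

    public static Direction direction(double x, double y, double w, double h,
                                      double screenW, double screenH) {
        double up = y + h, down = screenH - y, left = x + w, right = screenW - x;
        double min = Math.min(Math.min(up, down), Math.min(left, right));
        if (min == up) return Direction.UP;
        if (min == down) return Direction.DOWN;
        if (min == left) return Direction.LEFT;
        return Direction.RIGHT;
    }

    public static double moveMin(double x, double y, double w, double h,
                                 double screenW, double screenH) {
        double up = y + h, down = screenH - y, left = x + w, right = screenW - x;
        return Math.min(Math.min(up, down), Math.min(left, right));
    }
}
```

For instance, a 50x20 element whose top-left corner is at (10, 10) on a 1080x1920 screen is nearest to the top edge and needs to move only 30 pixels upward to disappear.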

Example method 3

Another embodiment of the present disclosure is a method for changing the transparency of a screen element by an up-and-down sliding gesture of the two-point touch type, comprising the following steps:

Step 1: a touch event detector is added to the layout containing the content elements and the screen elements. During detection, the event parameters of the touch event detector are first acquired, finger-press events and the touch type are detected, and it is checked whether the multi-point touch event is of the two-point touch type.

Step 2: if the two-point touch type is detected in Step 1, the sliding gesture is marked as entering its start stage and the Y coordinates of the two points, y1 and y2, are recorded. Once the gesture is in the start stage, finger-move events and the touch type are detected, the gesture is checked to still be a two-point touch, and the changes of y1 and y2 along the Y axis are tracked. Taking the larger of the two movement distances as the standard, a decrease in the y value is treated as an upward-movement event and an increase as a downward-movement event, and every 10 pixels of difference is treated as generating a 0.1 [upward movement ratio] or [downward movement ratio] event.
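The bookkeeping of this step can be sketched roughly as below, assuming raw pixel Y coordinates. The function name and return shape are hypothetical; the larger-displacement rule and the 10-pixels-per-0.1-ratio granularity come from the text.

```python
def slide_ratio_y(y1_start, y2_start, y1_now, y2_now, step_px=10):
    """Track both touch points along the Y axis, take the larger displacement
    as the standard, and emit the accumulated 0.1 [upward movement ratio] or
    [downward movement ratio] for every step_px pixels moved."""
    d1 = y1_now - y1_start
    d2 = y2_now - y2_start
    delta = d1 if abs(d1) >= abs(d2) else d2   # maximum moving distance wins
    ratio = (abs(delta) // step_px) * 0.1      # one 0.1 event per 10 pixels
    return ("up" if delta < 0 else "down"), ratio
```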

Step 3: for example, in the initial stage the transparency value of the screen element is 1 (the fully opaque state). Each time the gesture is detected to dispatch a 0.1 [upward movement ratio] event, the element's transparency value is decreased by 0.1, so that the element fades gradually until the value reaches 0 and the element enters a completely invisible state. Each time the gesture is detected to dispatch a 0.1 [downward movement ratio] event, the element's transparency value is increased by 0.1, so that the element becomes gradually opaque until the value reaches 1 and the element enters a completely visible state.
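This step amounts to stepping a transparency value by 0.1 and clamping it; a minimal sketch, using the text's convention that 1 is the fully opaque state and 0 the completely invisible state (the helper name and the clamping are assumptions):

```python
def apply_move_ratio(alpha, direction, ratio):
    """An [upward movement ratio] lowers the element's transparency value
    toward 0 (gradually transparent); a [downward movement ratio] raises it
    toward 1 (gradually opaque). The value is clamped to [0, 1]."""
    alpha = alpha - ratio if direction == "up" else alpha + ratio
    return max(0.0, min(1.0, alpha))
```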

Step 4: a finger-lift event is detected; when the number of touch points on the current screen is no longer 2, the sliding gesture is marked as entering its end stage, and after the gesture dispatches the [gesture end] event, the transparency of the screen element is set back to 1 and the element enters the fully opaque state.

Step 5: the screen element is displayed on the video picture in its fully opaque state.

In the above method, the transparency of the screen element is changed by an up-and-down sliding gesture of the two-point touch type: the upward motion of the sliding gesture makes the screen element more transparent, and the downward motion makes it more opaque. The mapping may also be reversed, that is, the upward motion makes the element more opaque and the downward motion makes it more transparent; neither is limited.

Of course, the sliding gesture may instead be a left-right sliding gesture, in which case the X coordinates of the two points, x1 and x2, are recorded once the gesture enters its start stage. Finger-move events and the touch type are then detected, the event is checked to still be a two-point touch, and the changes of x1 and x2 along the X axis are tracked. Taking the larger of the two movement distances as the standard, a decrease in the x value is treated as a leftward-movement event and an increase as a rightward-movement event, and every 10 pixels of difference is treated as generating a 0.1 [leftward movement ratio] or [rightward movement ratio] event. The transparency change of the screen element then corresponds to the [leftward movement ratio] or [rightward movement ratio]; the other steps are the same as in the up-and-down sliding method of the two-point touch type and are not repeated here.
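Because the left-right variant only swaps the tracked axis, the same bookkeeping can be written once for either axis. A hedged generalization; the names, defaults and axis parameter are assumptions, not from the original:

```python
def slide_ratio(p1_start, p2_start, p1_now, p2_now, axis="y", step_px=10):
    """Emit (direction, accumulated ratio) from two-point movement along one
    axis: 'up'/'down' for the Y axis, 'left'/'right' for the X axis, with one
    0.1 ratio per 10 pixels of the larger displacement."""
    d1 = p1_now - p1_start
    d2 = p2_now - p2_start
    delta = d1 if abs(d1) >= abs(d2) else d2
    negative, positive = ("left", "right") if axis == "x" else ("up", "down")
    return (negative if delta < 0 else positive), (abs(delta) // step_px) * 0.1
```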

In this embodiment, changing the transparency of the screen element through a sliding gesture is simpler than doing so through a zoom gesture, and the degree of the transparency change is easier to control.

Of course, the touch type, the touch action, the screen element moving mode, the ending action mode, the ending state, and the like are not limited as described above, and are not described herein again.

Example method 4

Another embodiment of the present disclosure is a method for moving a screen element to the screen edge by an up-and-down sliding gesture of the two-point touch type, comprising the following steps:

Step 1: a touch event detector is added to the layout containing the content elements and the screen elements. During detection, the event parameters of the touch event detector are first acquired, finger-press events and the touch type are detected, and it is checked whether the multi-point touch event is of the two-point touch type.

Step 2: if the two-point touch type is detected in Step 1, the sliding gesture is marked as entering its start stage and the Y coordinates of the two points, y1 and y2, are recorded. Once the gesture is in the start stage, finger-move events and the touch type are detected, the gesture is checked to still be a two-point touch, and the changes of y1 and y2 along the Y axis are tracked. Taking the larger of the two movement distances as the standard, a decrease in the y value is treated as an upward-movement event and an increase as a downward-movement event, and every 10 pixels of difference is treated as generating a 0.1 [upward movement ratio] or [downward movement ratio] event.

Step 3: for example, in the initial stage, the screen edge closest to the screen element is first identified, and the shortest distance the element must move in one of the up, down, left or right directions in order to be moved completely off the screen and out of sight is determined. This direction is denoted [DIRECTION] and this shortest distance [MOVE_MIN]. Each time the sliding gesture is detected to dispatch a 0.1 [upward movement ratio] event, the screen element is set to move [MOVE_MIN]/10 pixels in the [DIRECTION] direction, until the moved distance reaches [MOVE_MIN] and the element is completely invisible. Each time the sliding gesture is detected to dispatch a 0.1 [downward movement ratio] event, the screen element is set to move [MOVE_MIN]/10 pixels in the [-DIRECTION] direction, until it returns to its original position.

Step 4: a finger-lift event is detected; when the number of touch points on the current screen is no longer 2, the sliding gesture is marked as entering its end stage, and after the gesture dispatches the [gesture end] event, all screen elements return to their original positions.

Step 5: the screen element is displayed on the video picture in the state of having fully returned to its original position.
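Steps 2 through 4 of this method can be tied together in one small model: ratio events accumulate into an offset along [DIRECTION] capped at [MOVE_MIN], and the [gesture end] event snaps every element back. The harness below is an assumption; only those rules come from the text.

```python
def element_offset(move_min, up_events, down_events, gesture_ended=False):
    """Each 0.1 [upward movement ratio] event adds [MOVE_MIN]/10 pixels of
    outward movement, each 0.1 [downward movement ratio] event removes the
    same amount; [gesture end] returns the element to its original position."""
    if gesture_ended:
        return 0.0                                  # Step 4: snap back
    offset = (up_events - down_events) * move_min / 10
    return max(0.0, min(float(move_min), offset))   # clamp to [0, MOVE_MIN]
```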

In the above method, the screen element is moved to the screen edge by an up-and-down sliding gesture of the two-point touch type: the upward motion of the sliding gesture moves the screen element outward, and the downward motion moves it back inward. The mapping may also be reversed, that is, the upward motion moves the element inward and the downward motion moves it outward; neither is limited.

Of course, the sliding gesture may instead be a left-right sliding gesture, in which case the X coordinates of the two points, x1 and x2, are recorded once the gesture enters its start stage. Finger-move events and the touch type are then detected, the event is checked to still be a two-point touch, and the changes of x1 and x2 along the X axis are tracked. Taking the larger of the two movement distances as the standard, a decrease in the x value is treated as a leftward-movement event and an increase as a rightward-movement event, and every 10 pixels of difference is treated as generating a 0.1 [leftward movement ratio] or [rightward movement ratio] event. The position change of the screen element then corresponds to the [leftward movement ratio] or [rightward movement ratio]; the other steps are the same as in the up-and-down sliding method of the two-point touch type and are not repeated here.

In this embodiment, changing the position of the screen element through a sliding gesture is simpler than doing so through a zoom gesture, and the distance of the position change is easier to control.

Of course, the touch type, the touch action, the screen element moving mode, the ending action mode, the ending state, and the like are not limited as described above, and are not described herein again.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read Only Memory (ROM), or a Random Access Memory (RAM).

[ Screen element control device ]

As shown in fig. 3, in order to implement the technical solution in the embodiments of the present disclosure, the present disclosure provides a screen element control device, which may be applied to a terminal device with a touch screen to control a screen element that blocks a video picture during video playback.

The screen element control device described in this embodiment includes: a touch detection module 301, an action recognition module 302, a control change module 303 and a display module 304.

The touch detection module 301 is configured to detect a touch type of a user touching the touch screen. The touch type may include single-point touch, two-point touch, three-point touch, four-point touch, five-point touch, and the like, and the number of touch points is not limited.

And the action identification module 302 is used for identifying a touch action of a user touching the touch screen of the terminal device under the touch type and detecting an ending action of the touch action.

Here, the touch action may include a zoom action, a slide action, a click action, and the like. The slide action may include an upward, downward, leftward or rightward slide; the direction and shape of the slide are not limited, and may be a diagonal slide toward, for example, the upper left or lower right, or a slide tracing a specific shape such as a circle, square, triangle or particular letter. The click action may include a double-click, a single-click, a short press, a long press, and the like, and the manner of the click action is not limited.

Here, the detected ending action may be an event such as the user lifting a finger, a gesture pause, or a touch type change; when a preset condition is met, the end stage is entered.

And the control change module 303 is configured to control the currently displayed screen element to change according to the touch action, so that the video image blocked by the screen element is displayed clearly or without being blocked, and determine a final state of the change of the screen element according to the ending action.

Here, the screen element change manner may include a transparency change and/or a position change.

The transparency of a screen element may be set to any value between fully transparent and fully opaque, and the change may be continuous or may jump between fixed transparency levels. The change may also be limited to the screen elements in a certain region, for example one or more of the left, right, upper and lower regions of the screen, or a circular region of a given radius centered on the touch point, or a region of another shape; alternatively, all screen elements may change transparency.
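One of the options above, fading only the elements inside a circular region around the touch point, could be selected with a helper like the following. This is purely illustrative; the element representation (dicts with center coordinates) is an assumption.

```python
import math

def elements_in_radius(elements, touch_x, touch_y, radius):
    """Select the screen elements whose centers lie within the circle of the
    given radius around the touch point; only those would change transparency."""
    return [e for e in elements
            if math.hypot(e["x"] - touch_x, e["y"] - touch_y) <= radius]
```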

The position change of a screen element includes moving off the screen or moving back onto it. Each screen element may move to the screen edge nearest to it, or all screen elements may gather toward the center of the screen, move to one side edge, move in one direction, or move to one position; the specific moving direction and destination are not limited.

Here, the transparency change and/or position change of a screen element may be paired with any combination of the touch types and touch actions described above, without particular limitation. For example, the transparency of a screen element may be changed by a zoom action of the two-point touch type, with the zoom-in action raising the transparency and the zoom-out action lowering it; or the position of a screen element may be changed by a zoom action of the three-point touch type, with the zoom-in action moving the element off the screen and the zoom-out action moving it back; or, of course, the screen elements may be moved to one side by sliding toward that side in the five-point touch type. The combinations of these implementations and their effects are not limited.
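Pairings like these are naturally expressed as a lookup table keyed by (number of touch points, touch action). The sketch below encodes only the example combinations from the paragraph above; the keys and effect names are hypothetical identifiers, not from the original.

```python
GESTURE_TABLE = {
    # (touch points, action): effect on the screen elements
    (2, "zoom_in"):  "raise_transparency",
    (2, "zoom_out"): "lower_transparency",
    (3, "zoom_in"):  "move_out_of_screen",
    (3, "zoom_out"): "move_back_to_screen",
    (5, "slide_to_side"): "move_to_one_side",
}

def effect_for(points, action):
    """Look up the screen element change configured for a touch type and
    touch action; unknown combinations leave the elements unchanged."""
    return GESTURE_TABLE.get((points, action), "no_change")
```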

Here, the final state of the screen element change may be either the last changed state or the restored initial state, and is not particularly limited.

And the display module 304 is configured to display the screen element on the video picture according to the final state of the screen element change, thereby ending the control of the screen element that blocks the video picture.
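The cooperation of the four modules (301-304) can be summarized as a single pass: detect the touch type, recognize the action and its end, derive the element change, and hand the result to display. The wiring below is an assumed simplification; only the module responsibilities come from the text.

```python
def control_screen_elements(touch_points, gesture, ended, change_table):
    """Detect the touch type (module 301), recognize the action and its end
    (module 302), derive the element change (module 303), and return what the
    display module (304) should render as the final state."""
    touch_type = f"{touch_points}-point"          # touch detection module
    action = "end" if ended else gesture          # action recognition module
    change = change_table.get((touch_type, action), "none")  # control change
    return {"touch_type": touch_type, "change": change}      # display module
```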

It should be understood that although each block in the block diagrams may represent a module comprising one or more executable instructions for implementing the specified logical functions, the blocks are not necessarily executed sequentially. Each module and functional unit in the device embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more modules or functional units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, etc.

[ Screen element control equipment ]

In order to solve the technical problem, an embodiment of the present disclosure further provides an electronic device. Referring now to fig. 4, a schematic diagram of an electronic device (e.g., a terminal device or a server in fig. 1) 400 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.

Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. When executed by the processing device 401, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.

It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.

The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.

The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.

Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.

Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

According to one or more embodiments of the present disclosure, there is provided a screen element control method, characterized in that the method includes:

The screen element control method is applied to a terminal device with a touch screen to control a screen element that blocks a video picture during video playback, and comprises the following steps:

detecting a touch type of a user touching the touch screen;

identifying a touch action of a user touching the touch screen under the touch type;

controlling the currently displayed screen element to change according to the touch action, so that the video picture blocked by the screen element is displayed clearly or without blocking, wherein the screen element change manner comprises at least one of a transparency change and a position change;

detecting an ending action of the touch action, and determining the final state of the screen element change according to the ending action;

and displaying the screen element on the video picture according to the final state of the screen element change.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the touch type at least comprises one of single-point touch, two-point touch, three-point touch, four-point touch and five-point touch.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the ending action at least comprises one of the conditions of finger lifting, gesture pause and touch type change.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the final state of the screen element change includes at least one of maintaining a last changed state or restoring an initial state.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the touch action at least comprises one of zooming action, sliding action and clicking action.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the screen element transparency change comprises a transparency decrease or increase;

the zoom-in action of the zoom action corresponds to one of the decrease or the increase of the screen element transparency, and the zoom-out action of the zoom action corresponds to the other.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the screen element position change comprises moving out of the screen or moving back to the screen;

the zooming in action of the zooming action corresponds to one of the screen element positions moving out of the screen or moving back to the screen, and the zooming out action of the zooming action corresponds to the other of the screen element positions moving out of the screen or moving back to the screen.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the sliding motion includes at least one of sliding up, sliding down, sliding left, and sliding right.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

the clicking action at least comprises one of double-click action, single-click action, short-press action and long-press action.

According to one or more embodiments of the present disclosure, there is provided a screen element control method characterized in that,

any one of the above screen element control methods is applied to an Android system application or an iOS system application.

According to one or more embodiments of the present disclosure, there is provided a screen element control apparatus applied to a terminal device having a touch screen, the apparatus controlling a screen element blocking a video picture when playing a video, the apparatus including:

the touch detection module is used for detecting the touch type of the touch screen touched by a user;

the action recognition module recognizes a touch action of a user touching the touch screen under the touch type and detects an ending action of the touch action;

the control change module is used for controlling the currently displayed screen element to change according to the touch action, so that the video picture blocked by the screen element is displayed clearly or without blocking, and for determining the final state of the screen element change according to the ending action, wherein the screen element change manner comprises at least one of a transparency change and a position change;

and the display module displays the screen element on the video picture according to the final state of the screen element change.

According to one or more embodiments of the present disclosure, there is provided a screen element control apparatus, characterized in that,

the touch type at least comprises one of single-point touch, two-point touch, three-point touch, four-point touch and five-point touch;

the touch action at least comprises one of zooming action, sliding action and clicking action;

the ending action at least comprises one of the conditions of lifting a finger, pausing a gesture and changing the touch type;

the final state of the screen element change includes at least one of maintaining a last changed state or restoring an initial state.

According to one or more embodiments of the present disclosure, there is provided a computer device including a memory in which a computer program is stored and a processor that implements the screen element control method according to any one of the above when the computer program is executed by the processor.

According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium, characterized in that a computer program is stored thereon, which when executed by a processor implements the screen element control method as in any one of the above.

The foregoing description is only of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in this disclosure.

Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
