Page processing method and device

Document No.: 1056796 · Publication date: 2020-10-13

Reading note: the technology "Page processing method and device" was designed and created by 冀文彬 on 2019-11-28. Abstract: The disclosure discloses a page processing method and device, relating to the field of data processing. The method includes: identifying identity information of a user browsing a page; and if the user is identified as a visually impaired user, outputting the page content in voice form in response to the user triggering the page content, so that the user can acquire it. The disclosure makes it easier for visually impaired users to obtain webpage content and thereby improves the efficiency with which such users operate the page.

1. A page processing method includes:

identifying identity information of a user browsing a page;

and if the user is identified as a visually impaired user, outputting the page content in voice form in response to the user triggering page content, so that the user can acquire the page content.

2. The page processing method according to claim 1, wherein if the user is identified as a visually impaired user, further comprising:

performing one or more of: expanding a text region of the page content, reducing an image region of the page content, and reducing a video region of the page content.

3. The page processing method of claim 1, wherein outputting the page content in speech form comprises:

if the page content is image data, reading the image content in the image data based on an image recognition technology;

converting the image content into text data;

and converting the text data into voice data through a text-to-voice module, and outputting the voice data.

4. The page processing method of claim 1, wherein outputting the page content in speech form comprises:

and if the page content is text data, converting the text data into voice data through a text-to-voice module, and outputting the voice data.

5. The page processing method of claim 1, wherein outputting the page content in speech form comprises:

if the page content is video data, judging whether the video data contains voice data;

if the video data comprises voice data, outputting the voice data;

if the video data does not contain voice data, judging whether the video data contains at least one of text data and image data;

if the video data contains text data, converting the text data into voice data through a text-to-voice module, and outputting the voice data;

if the video data contains image data, reading the image content in the image data based on an image recognition technology, converting the image content into text data, converting the text data into voice data through a text-to-voice module, and outputting the voice data.

6. The page processing method according to claim 3 or 5, wherein reading image content in the image data based on an image recognition technique comprises:

extracting image features in the image data based on the image recognition technique;

and comparing the image features with article features in an article feature library to identify article information in the image data.

7. The page processing method according to any one of claims 1 to 5, wherein the identity information of the user is identified by at least one of:

in response to the user starting screen-reading software, identifying the user as a visually impaired user;

acquiring the number of clicks of the user in the page, and if the number of clicks is greater than a threshold value, identifying the user as a visually impaired user;

and acquiring a user name with which the user logs in to the page, and determining whether the user name corresponds to a visually impaired user by querying a user database.

8. The page processing method of claim 2, further comprising:

and adjusting the layout of the display area in the page to increase the amount of displayed content.

9. A page processing apparatus comprising:

an identity discrimination unit configured to identify identity information of a user browsing a page;

and an information output unit configured to, if the user is identified as a visually impaired user, output the page content in voice form in response to the user triggering page content, so that the user can acquire the page content.

10. The page processing apparatus of claim 9, further comprising:

a page adjusting unit configured to perform one or more of expanding a text region of the page content, reducing an image region of the page content, and reducing a video region of the page content if the user is identified as a visually impaired user.

11. A page processing apparatus comprising:

a memory; and

a processor coupled to the memory, the processor configured to perform the page processing method of any of claims 1 to 8 based on instructions stored in the memory.

12. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the page processing method of any one of claims 1 to 8.

Technical Field

The present disclosure relates to the field of data processing, and in particular, to a page processing method and apparatus.

Background

Existing e-commerce shopping pages are designed for sighted users. A visually impaired person using such a page cannot effectively acquire the shopping page information, so the shopping process is inconvenient and inefficient.

Disclosure of Invention

The technical problem to be solved by the present disclosure is to provide a page processing method and apparatus that make it easier for visually impaired users to obtain page information.

According to an aspect of the present disclosure, a page processing method is provided, including: identifying identity information of a user browsing a page; and if the user is identified as a visually impaired user, outputting the page content in voice form in response to the user triggering page content, so that the user can acquire the page content.

In some embodiments, if the user is identified as a visually impaired user, the method further comprises: one or more of expanding a text region of the page content, reducing an image region of the page content, and reducing a video region of the page content is performed.

In some embodiments, outputting the page content in speech form includes: if the page content is image data, reading the image content in the image data based on an image recognition technology; converting the image content into text data; and converting the text data into voice data through a text-to-voice module, and outputting the voice data.

In some embodiments, outputting the page content in speech form includes: if the page content is text data, the text data is converted into voice data through a text-to-voice module, and the voice data is output.

In some embodiments, outputting the page content in speech form includes: if the page content is video data, determining whether the video data contains voice data; if the video data contains voice data, outputting the voice data; if the video data does not contain voice data, determining whether the video data contains at least one of text data and image data; if the video data contains text data, converting the text data into voice data through a text-to-voice module and outputting the voice data; and if the video data contains image data, reading the image content in the image data based on an image recognition technology, converting the image content into text data, converting the text data into voice data through the text-to-voice module, and outputting the voice data.

In some embodiments, reading image content in the image data based on the image recognition technique comprises: extracting image features in the image data based on an image recognition technology; and comparing the image features with the article features in an article feature library to identify the article information in the image data.

In some embodiments, the identity information of the user is identified by at least one of the following: in response to the user starting screen-reading software, identifying the user as a visually impaired user; acquiring the number of clicks of the user in the page, and if the number of clicks is greater than a threshold value, identifying the user as a visually impaired user; and acquiring a user name with which the user logs in to the page, and determining whether the user name corresponds to a visually impaired user by querying a user database.

In some embodiments, the layout of the presentation area in the page is adjusted to increase the amount of presented content.

According to another aspect of the present disclosure, there is also provided a page processing apparatus, including: an identity discrimination unit configured to identify identity information of a user browsing a page; and an information output unit configured to, if the user is identified as a visually impaired user, output the page content in voice form in response to the user triggering the page content, so that the user can acquire the page content.

In some embodiments, the apparatus further includes a page adjusting unit configured to, if the user is identified as a visually impaired user, perform one or more of enlarging a text region of the page content, reducing an image region of the page content, and reducing a video region of the page content.

According to another aspect of the present disclosure, there is also provided a page processing apparatus, including: a memory; and a processor coupled to the memory, the processor configured to perform the page processing method as described above based on instructions stored in the memory.

According to another aspect of the present disclosure, a computer-readable storage medium is also proposed, on which computer program instructions are stored, which when executed by a processor implement the above-mentioned page processing method.

Compared with the prior art, in the embodiments of the disclosure, if the user browsing the webpage is identified as a visually impaired user, the webpage content triggered by the user is output in voice form, so that the visually impaired user can conveniently obtain the webpage content, which in turn improves the efficiency with which the user operates the webpage.

Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.

The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:

fig. 1 is a flow diagram of some embodiments of a page processing method of the present disclosure.

Fig. 2 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

Fig. 3 is a schematic page layout diagram of the present disclosure.

Fig. 4 is a schematic page layout diagram of the present disclosure.

Fig. 5 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

Fig. 6 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

Fig. 7 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

Fig. 8 is a schematic structural diagram of some embodiments of a page processing apparatus according to the present disclosure.

Fig. 9 is a schematic structural diagram of another embodiment of a page processing apparatus according to the present disclosure.

Fig. 10 is a schematic structural diagram of another embodiment of a page processing apparatus according to the present disclosure.

Fig. 11 is a schematic structural diagram of another embodiment of a page processing apparatus according to the present disclosure.

Detailed Description

Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.

Meanwhile, it should be understood that, for convenience of description, the sizes of the respective portions shown in the drawings are not drawn to scale.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.

Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.

In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.

It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.

For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.

Fig. 1 is a flow diagram of some embodiments of a page processing method of the present disclosure.

At step 110, identity information of a user browsing the page is identified. The page is for example a shopping page.

In some embodiments, the user is identified as a visually impaired user in response to the user turning on screen-reading software. For example, if the user starts a reading-assistance voice application, the user is determined to be a visually impaired user.

In some embodiments, the number of clicks of the user in the page is obtained, and if the number of clicks is greater than a threshold value, the user is identified as a visually impaired user. For example, if the user clicks on most of the elements of each module, such as the commodity name, commodity picture, and price, the user is determined to be a visually impaired user.

In some embodiments, the user name with which the user logs in to the page is obtained, and whether the user name corresponds to a visually impaired user is determined by querying a user database. For example, the user name is looked up, and then the social identity associated with the user name and other records of the user's identity are queried to determine whether the user is a visually impaired user.

The visually impaired user in the above embodiment refers to a visually impaired user having reading ability.
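As a rough illustration of how these identification cues might be combined in a browser setting, the TypeScript sketch below checks the three signals described above. The endpoint path, the UserRecord shape, and the click threshold are hypothetical values introduced only for this example; the disclosure does not prescribe them.

```typescript
interface UserRecord {
  userName: string;
  visuallyImpaired: boolean; // flag assumed to be maintained in the user database
}

// Hypothetical lookup against the user database mentioned above.
async function queryUserDatabase(userName: string): Promise<UserRecord | undefined> {
  const response = await fetch(`/api/users/${encodeURIComponent(userName)}`);
  return response.ok ? (response.json() as Promise<UserRecord>) : undefined;
}

const CLICK_THRESHOLD = 20; // assumed threshold for "excessive" clicking on the page

async function isVisuallyImpairedUser(
  screenReaderActive: boolean, // cue 1: screen-reading software detected
  clickCount: number,          // cue 2: number of clicks of the user in the page
  userName?: string            // cue 3: logged-in user name, if any
): Promise<boolean> {
  if (screenReaderActive) return true;
  if (clickCount > CLICK_THRESHOLD) return true;
  if (userName) {
    const record = await queryUserDatabase(userName);
    if (record?.visuallyImpaired) return true;
  }
  return false;
}
```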

In step 120, if the user is identified as a visually impaired user, the page content is output in voice form in response to the user triggering the page content, so that the user can obtain the page content.

For example, when the user touches an image, text, or a video, the corresponding image, text, or video content is presented to the user in audio form.

In this embodiment, if the user browsing the webpage is identified as a visually impaired user, the webpage content triggered by the user is output in voice form, which makes it easier for the visually impaired user to acquire the webpage content and thereby improves the efficiency with which the user operates the webpage.

Fig. 2 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

At step 210, identity information of a user browsing the page is identified.

In step 220, if the user is identified as a visually impaired user, one or more of expanding the text region of the page content, reducing the image region of the page content, and reducing the video region of the page content are performed in response to the user triggering the page content.

For example, as shown in Figs. 3 and 4, the font size of the text is increased by a predetermined factor, for example 1-5 times, and the typeface may also be changed, so that the text is easy to click on and easy for the user to recognize. As another example, the picture or video area is reduced to some proportion (between 0 and 100%) of its original size. The text, picture, and video areas may be enlarged or reduced according to the display size of the device.

In some embodiments, as shown in Fig. 3, after the sizes of the text, picture, and video are changed, the number of items displayed in the page layout is unchanged.

In other embodiments, as shown in Fig. 4, the layout of the presentation area in the page is adjusted to increase the amount of presented content. For example, a page that originally shows 6 items is adjusted to show 12 items.

Because turning pages is inconvenient for visually impaired users, displaying more content on a single page reduces the number of page-turning actions, lets the user learn about more related articles, and makes it easier to choose items while shopping.
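A minimal DOM-level sketch of such a page adjustment is given below. The CSS selectors (.item-text, .item-image, .item-video, .item-grid) and the scaling factors are illustrative assumptions; the actual selectors and factors would depend on the page in question and the device's display size.

```typescript
// Enlarge text, shrink image/video regions, and densify the grid layout.
function adjustPageForVisuallyImpairedUser(fontScale = 2, mediaScale = 0.5): void {
  // Expand the text region: multiply the font size by a predetermined factor (1-5x).
  document.querySelectorAll<HTMLElement>(".item-text").forEach((el) => {
    const size = parseFloat(getComputedStyle(el).fontSize);
    el.style.fontSize = `${size * fontScale}px`;
  });

  // Reduce the image and video regions to a fraction of their original size.
  document.querySelectorAll<HTMLElement>(".item-image, .item-video").forEach((el) => {
    el.style.transform = `scale(${mediaScale})`;
    el.style.transformOrigin = "top left";
  });

  // Adjust the layout so one page shows more items (e.g. 6 -> 12), reducing page turns.
  const grid = document.querySelector<HTMLElement>(".item-grid");
  if (grid) grid.style.gridTemplateColumns = "repeat(4, 1fr)";
}
```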

Fig. 5 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

At step 510, identity information of a user browsing the page is identified.

In step 520, if the user is identified as a visually impaired user, the page display content is adjusted. For example, the text font size is increased, the picture is reduced, and the video area is reduced.

In step 530, in response to the user triggering text data on the page, the text data is converted into voice data by the text-to-voice module, and the voice data is output.

For example, when a user clicks on the text introduction of an article, the text data is encoded and transmitted to the text-to-voice module as an electrical signal, and the module converts it into an audio signal, that is, the voice data is output. The text-to-voice module may be existing speech-conversion software.
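In a browser, one possible realization of such a text-to-voice module is the Web Speech API's speech synthesis interface, as sketched below. This is an assumed implementation choice (including the .item-text selector and the default language); the disclosure only requires that some existing speech-conversion software be used.

```typescript
// Convert text data to voice data and output it, using the browser's speech synthesis.
function speakText(text: string, lang = "zh-CN"): void { // language is an assumption
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  window.speechSynthesis.speak(utterance);
}

// Example: read out the clicked article's text introduction.
document.querySelectorAll<HTMLElement>(".item-text").forEach((el) => {
  el.addEventListener("click", () => speakText(el.innerText));
});
```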

In this embodiment, when the user browsing the page is identified as a visually impaired user, the page display content is adjusted according to the above rules, and the text data clicked by the user is converted into voice data, so that the user can conveniently learn the article information, improving the user's shopping experience and efficiency.

Fig. 6 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

At step 610, identity information of a user browsing the page is identified.

In step 620, if the user is identified as a visually impaired user, the page display content is adjusted.

In step 630, in response to the user triggering image data on the page, the image content in the image data is read based on image recognition technology.

In some embodiments, image features in the image data are extracted based on image recognition techniques, and the image features are compared with the article features in an article feature library to identify the article information in the image data.

For example, suppose a user clicks on a picture of shampoo. Since both shampoo and cola come in bottle-shaped packages, a feature identification library for different commodity types can be established in advance, that is, an AI identification library is built from the characteristic shapes, colors, and structures of commodities in pictures; the extracted image features are then compared with the article features in this library to determine the article information in the image.
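The comparison step could look roughly like the following sketch, which assumes that image features have already been extracted as numeric vectors and that the article feature library is a list of labelled vectors; both are modelling assumptions not fixed by the disclosure.

```typescript
interface ArticleFeature {
  articleName: string; // e.g. "shampoo", "cola"
  feature: number[];   // characteristic shape/colour/structure encoded as a vector
}

// Cosine similarity between an extracted image feature and a library entry.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Compare the image features with the article feature library and return the best match.
function identifyArticle(imageFeature: number[], library: ArticleFeature[]): ArticleFeature | undefined {
  let best: ArticleFeature | undefined;
  let bestScore = -Infinity;
  for (const entry of library) {
    const score = cosineSimilarity(imageFeature, entry.feature);
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best;
}
```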

At step 640, the image content is converted to textual data.

In step 650, the text data is converted into voice data by the text-to-voice module, and the voice data is output.

In this embodiment, when the user browsing the page is identified as a visually impaired user, the page display content is adjusted according to the above rules, and the picture data clicked by the user is converted into voice data, so that the user can quickly understand the information about the articles in the picture.

Fig. 7 is a flowchart illustrating another embodiment of a page processing method according to the present disclosure.

At step 710, identity information of a user browsing the page is identified.

In step 720, if the user is identified as a vision-impaired user, adjusting the page display content.

In step 730, in response to the user triggering video data on the page, it is determined whether the video data contains voice data; if so, step 740 is performed; otherwise, step 750 is performed.

At step 740, the voice data is output.

In some embodiments, it may first be determined whether the voice data is related to the article; if so, the voice data is output directly, and if not, it is filtered out.

In step 750, it is determined whether the video data contains text data or image data; if it contains text data, step 760 is executed, and if it contains image data, step 770 is executed.

In step 760, the text data is converted into voice data by the text-to-voice module, and the voice data is output.

In step 770, the image content in the image data is read based on the image recognition technology, the image content is converted into text data, the text data is converted into voice data through the text-to-voice module, and the voice data is output.

In this embodiment, when the user browsing the page is identified as a visually impaired user, the page display content is adjusted according to the above rules, and the content of the video data clicked by the user is output in voice form, so that the user can obtain the article's image, text, and audio information comprehensively.
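The decision flow of steps 730-770 might be sketched as below, reusing the speakText and identifyArticle helpers from the earlier sketches. The VideoContent shape, and the assumption that captions and key-frame features have already been extracted from the video, are illustrative placeholders rather than details given by the disclosure.

```typescript
interface VideoContent {
  audioTrack?: HTMLAudioElement; // existing voice data in the video, if any
  captionText?: string;          // text data extracted from the video frames
  keyFrameFeature?: number[];    // image features extracted from a key frame
}

function outputVideoAsVoice(video: VideoContent, library: ArticleFeature[]): void {
  if (video.audioTrack) {
    // Step 740: the video already contains voice data, so play it directly.
    void video.audioTrack.play();
    return;
  }
  if (video.captionText) {
    // Step 760: convert the text data into voice data and output it.
    speakText(video.captionText);
    return;
  }
  if (video.keyFrameFeature) {
    // Step 770: recognize the article in the image data, then speak its name.
    const article = identifyArticle(video.keyFrameFeature, library);
    if (article) speakText(article.articleName);
  }
}
```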

Fig. 8 is a schematic structural diagram of some embodiments of a page processing apparatus according to the present disclosure. The apparatus includes an identity determination unit 810 and an information output unit 820.

The identity discrimination unit 810 is configured to identify identity information of a user browsing a page.

For example, in response to the user opening screen-reading software, the user is identified as a visually impaired user; or the number of clicks of the user in the page is obtained, and if the number of clicks is greater than a threshold value, the user is identified as a visually impaired user; or the user name with which the user logs in to the page is obtained, and whether the user name corresponds to a visually impaired user is determined by querying a user database.

The information output unit 820 is configured to, if the user is identified as a visually impaired user, output the page content in voice form in response to the user triggering the page content, so that the user can acquire the page content.

For example, when the user touches an image, text, or a video, the corresponding image, text, or video content is presented to the user in audio form.

In some embodiments, if the page content is image data, the image content in the image data is read based on an image recognition technology; the image content is converted into text data; and the text data is converted into voice data through the text-to-voice module and output. Specifically, image features in the image data are extracted based on the image recognition technology, and the image features are compared with the article features in the article feature library to identify the article information in the image data.

In other embodiments, if the page content is text data, the text data is converted into voice data by the text-to-voice module, and the voice data is output.

In other embodiments, if the page content is video data, it is determined whether the video data contains voice data; if the video data contains voice data, the voice data is output; if the video data does not contain voice data, it is determined whether the video data contains at least one of text data and image data; if the video data contains text data, the text data is converted into voice data by the text-to-voice module and output; and if the video data contains image data, the image content in the image data is read based on an image recognition technology, the image content is converted into text data, the text data is converted into voice data by the text-to-voice module, and the voice data is output.
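Pulling the three cases together, the information output unit could dispatch on content type roughly as follows. The PageContent union type is an illustrative assumption, and the helpers are the ones sketched in the method embodiments above.

```typescript
type PageContent =
  | { kind: "text"; text: string }
  | { kind: "image"; feature: number[] }
  | { kind: "video"; video: VideoContent };

function outputContentAsVoice(content: PageContent, library: ArticleFeature[]): void {
  switch (content.kind) {
    case "text":
      speakText(content.text); // text data -> text-to-voice module -> voice output
      break;
    case "image": {
      const article = identifyArticle(content.feature, library);
      if (article) speakText(article.articleName); // image -> article info -> text -> voice
      break;
    }
    case "video":
      outputVideoAsVoice(content.video, library); // voice track, captions, or key frame
      break;
  }
}
```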

In this embodiment, if the user browsing the webpage is identified as a visually impaired user, the webpage content triggered by the user is output in voice form, which makes it easier for the visually impaired user to acquire the webpage content and thereby improves the efficiency with which the user operates the webpage.

In other embodiments of the present disclosure, the apparatus further includes a page adjusting unit 910 configured to perform one or more of enlarging a text region of the page content, reducing an image region of the page content, and reducing a video region of the page content if the user is identified as a visually impaired user.

In some embodiments, the layout of the presentation area in the page is adjusted to increase the presentation content.

In this embodiment, when a visually impaired user operates the page, the page display content is adjusted, reducing the user's page-turning actions and making it easier for the user to learn more information about related articles.

Fig. 10 is a schematic structural diagram of another embodiment of a page processing apparatus according to the present disclosure. The apparatus includes a memory 1010 and a processor 1020, wherein: the memory 1010 may be a magnetic disk, flash memory, or any other non-volatile storage medium. The memory is used to store instructions in the embodiments corresponding to fig. 1, 2, 5-7. The processor 1020, coupled to the memory 1010, may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 1020 is configured to execute instructions stored in a memory.

In some embodiments, as also shown in fig. 11, the apparatus 1100 includes a memory 1110 and a processor 1120. Processor 1120 is coupled to memory 1110 by a BUS 1130. The device 1100 may also be coupled to an external storage device 1150 via a storage interface 1140 for retrieving external data, and may also be coupled to a network or another computer system (not shown) via a network interface 1160, which will not be described in detail herein.

In this embodiment, data and instructions are stored in the memory and processed by the processor, so that a visually impaired user can conveniently obtain the webpage content, improving the efficiency with which the user operates the webpage.

In further embodiments, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the embodiments corresponding to fig. 1, 2, 5-7. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.

Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.
