Vision detection method, device, equipment and storage medium

Document serial number: 592028    Publication date: 2021-05-28

Reading note: The technology "Vision detection method, device, equipment and storage medium" was designed and created by 胡风硕, 王镜茹 and 贾红红 on 2021-03-09. Its main content is as follows: The embodiments of the application provide a vision detection method, a vision detection device, equipment and a storage medium. The vision detection method comprises the following steps: after the i-th-level visually recognizable information is generated and displayed, periodic detection is carried out, wherein the detection process of one cycle includes: acquiring limb image information of the user for the i-th-level visually recognizable information; determining whether the limb image information matches the i-th-level visually recognizable information; and if a detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information. The embodiments of the application realize a machine detection mode of vision detection that can replace current manual vision detection and effectively reduce the cost of manual detection. Machine detection can be realized by household electronic equipment or even personal portable electronic equipment and places extremely low requirements on the detection location, thereby effectively overcoming the regional limitation of manual detection; machine detection is highly flexible and engaging.

1. A method of vision testing, comprising:

after the i-th-level visual identifiable information is generated and displayed, periodic detection is carried out;

wherein, the detection process of one cycle includes:

acquiring limb image information of the user aiming at the ith-level visual identifiable information;

determining whether the limb image information matches the i-th-level visually recognizable information; if they match, generating and displaying the (i+1)-th-level visually recognizable information, and performing detection of the next cycle until a detection end condition is met; if they do not match, generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information, and performing detection of subsequent cycles until the detection end condition is met;

if the detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level visually recognizable information, the i-th-level visually recognizable information and the (i+1)-th-level visually recognizable information progresses from poorer to better.

2. The vision testing method of claim 1, wherein the generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information, and performing the subsequent periodic detection until the detection end condition is satisfied, comprises:

generating and presenting the i-th-level visually recognizable information in the other direction;

acquiring another piece of limb image information of the user for the i-th-level visually recognizable information in the other direction;

if the other limb image information is confirmed to match the i-th-level visually recognizable information in the other direction, generating and displaying the (i+1)-th-level visually recognizable information, and performing detection of the next cycle until the detection end condition is met;

and if the other limb image information does not match the i-th-level visually recognizable information in the other direction, generating and displaying the (i-1)-th-level visually recognizable information, and performing detection of the next cycle until the detection end condition is met.

3. A vision testing method according to claim 2, wherein said generating and presenting said i-1 th-level visually recognizable information and performing the next cycle of testing until said testing end condition is satisfied includes:

generating and displaying the i-1 th level visually recognizable information;

acquiring further limb image information of the user for the (i-1)-th-level visually recognizable information;

if it is determined that the further limb image information does not match the (i-1)-th-level visually recognizable information, determining whether the number of mismatches has reached a set number of times; and if the set number of times has been reached, determining that the detection end condition is met.

4. A vision testing method according to claim 2, wherein said generating and presenting said i-1 th-level visually recognizable information and performing the next cycle of testing until said testing end condition is satisfied includes:

generating and displaying the i-1 th level visually recognizable information;

acquiring further limb image information of the user for the (i-1)-th-level visually recognizable information;

if it is determined that the further limb image information does not match the (i-1)-th-level visually recognizable information, determining whether the evaluation quality of the (i-1)-th-level visually recognizable information has reached the worst design evaluation quality; and if the worst design evaluation quality has been reached, determining that the detection end condition is met.

5. A vision testing method according to claim 1, wherein said generating and presenting i +1 th-level visually recognizable information and performing the testing of the next cycle until the testing end condition is satisfied includes:

generating and displaying the i +1 th-level visually recognizable information;

acquiring further limb image information of the user for the (i+1)-th-level visually recognizable information;

if it is determined that the further limb image information matches the (i+1)-th-level visually recognizable information, determining whether the evaluation quality of the (i+1)-th-level visually recognizable information has reached the optimal design evaluation quality; and if the optimal design evaluation quality has been reached, determining that the detection end condition is met.

6. The vision testing method of claim 1, wherein said generating and presenting i-th level visually identifiable information comprises: generating and displaying ith-level visual identifiable information and answer information for the user to select;

the acquiring of the limb image information of the user for the i-th-level visually recognizable information comprises: acquiring the limb image information of the user for the i-th-level visually recognizable information, and confirming that limb image information which can be mapped to the answer information and is held for a set time is valid limb image information.

7. The vision testing method of any one of claims 1-6, wherein the limb image information includes: at least one of finger pointing information, arm pointing information, leg pointing information, and head pointing information.

8. A vision testing device, comprising:

the visually recognizable information display module is used for generating and displaying the i-th-level visually recognizable information; if the limb image information of the user for the i-th-level visually recognizable information matches the i-th-level visually recognizable information, generating and displaying the (i+1)-th-level visually recognizable information; if not, generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level visually recognizable information, the i-th-level visually recognizable information and the (i+1)-th-level visually recognizable information progresses from poorer to better; and if a detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information;

the limb image information acquisition module is used for acquiring the limb image information of the user for the i-th-level visually recognizable information;

and the information processing module is used for determining whether the limb image information matches the i-th-level visually recognizable information, until the detection end condition is met.

9. A vision testing device, comprising:

a display;

a camera;

a controller in signal connection with the display and the camera, respectively; wherein the controller is configured to perform the vision testing method of any one of claims 1-7.

10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the vision detection method of any one of claims 1-7.

Technical Field

The present application relates to the field of vision testing technologies, and in particular, to a vision testing method, apparatus, device, and storage medium.

Background

Visual acuity testing is an important approach to assessing visual acuity status.

The existing vision test mainly adopts a manual test mode: an optometrist points to a pattern or symbol on professional optometry equipment, the person being tested observes and recognizes it and then speaks the answer, and the optometrist gives a test result according to whether the answers are right or wrong. Moreover, the subject needs to go to a hospital, a spectacle store, or another place with professional optometry equipment to be tested.

Therefore, the existing vision detection mode has the defects of high manual detection cost, large detection region limitation, poor detection experience and the like.

Disclosure of Invention

The present application provides a method, an apparatus, a device and a storage medium for eyesight detection, which are used to solve at least some of the above technical problems in the prior art.

In a first aspect, an embodiment of the present application provides a vision testing method, including:

after the i-th-level visual identifiable information is generated and displayed, periodic detection is carried out;

wherein, the detection process of one cycle includes:

acquiring limb image information of a user aiming at the ith-level visual identifiable information;

determining whether the limb image information matches the i-th-level visually recognizable information; if they match, generating and displaying the (i+1)-th-level visually recognizable information, and performing detection of the next cycle until a detection end condition is met; and if they do not match, generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information, and performing detection of subsequent cycles until the detection end condition is met;

if the detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level visually recognizable information, the i-th-level visually recognizable information and the (i+1)-th-level visually recognizable information progresses from poorer to better.

In a second aspect, an embodiment of the present application provides a vision testing apparatus, including:

the visually recognizable information display module is used for generating and displaying the i-th-level visually recognizable information; if the limb image information of the user for the i-th-level visually recognizable information matches the i-th-level visually recognizable information, generating and displaying the (i+1)-th-level visually recognizable information; if not, generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level visually recognizable information, the i-th-level visually recognizable information and the (i+1)-th-level visually recognizable information progresses from poorer to better; and if the detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information;

the limb image information acquisition module is used for acquiring the limb image information of the user for the i-th-level visually recognizable information;

and the information processing module is used for determining whether the limb image information matches the i-th-level visually recognizable information, until the detection end condition is met.

In a third aspect, an embodiment of the present application provides a vision testing apparatus, including:

a display;

a camera;

a controller in signal connection with the display and the camera, respectively, the controller being configured to perform the vision testing method as provided in the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the vision detecting method provided in the first aspect.

The beneficial technical effects of the technical solutions provided by the embodiments of the application include the following: visually recognizable information is displayed to the user, limb image information of the user for the visually recognizable information is acquired, and vision detection result information is output from the limb image information and the visually recognizable information according to the analysis rules provided by this application, which facilitates a machine detection mode of vision detection. Machine detection can replace current manual vision detection and thereby effectively reduce the cost of manual detection. Machine detection can be realized by household electronic equipment or even personal portable electronic equipment, places extremely low requirements on the detection location, and thereby effectively overcomes the regional limitation of manual detection; machine detection is highly flexible and engaging, and the user's detection experience is better.

Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.

Drawings

The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

fig. 1 is a schematic flow chart of a vision testing method according to an embodiment of the present disclosure;

fig. 2 is a schematic flow chart illustrating, in a vision detection method provided in an embodiment of the present application, that the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information is generated and displayed, and detection is performed in subsequent cycles until a detection end condition is satisfied;

fig. 3 is a schematic flowchart of a first implementation manner in which, in a vision inspection method provided in an embodiment of the present application, i-1 th-level visually recognizable information is generated and displayed, and inspection in a next period is performed until an inspection end condition is satisfied;

fig. 4 is a schematic flowchart of a second implementation manner in which, in the vision inspection method provided in the embodiment of the present application, i-1 th-level visually recognizable information is generated and displayed, and inspection in a next period is performed until an inspection end condition is satisfied;

fig. 5 is a schematic flow chart illustrating a method for detecting eyesight according to an embodiment of the present application, in which i +1 th-level visually recognizable information is generated and displayed, and detection is performed in a next period until a detection end condition is satisfied;

FIG. 6 is a schematic flow chart of another vision testing method provided in the embodiments of the present application;

fig. 7 is a schematic frame diagram of a vision testing apparatus 200 according to an embodiment of the present application;

fig. 8 is a schematic frame diagram of a vision testing device 100 according to an embodiment of the present application.

In the figure:

100-a vision detection device; 110-a display; 120-a camera; 130-a controller;

200-vision testing device; 210-a visually identifiable information presentation module; 220-a limb image information acquisition module; 230-information processing module.

Detailed Description

Reference will now be made in detail to the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar parts or parts having the same or similar functions throughout. In addition, if a detailed description of the known art is not necessary for illustrating the features of the present application, it is omitted. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.

It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.

The inventors of the present application have conducted research and found that gesture recognition is the problem of recognizing human gestures through mathematical algorithms. Gestures may come from movements of various parts of a person's body, such as movements of the face and hands. A user can use simple gestures to control or interact with an electronic device, allowing a computer to understand human behavior without being touched. Gesture recognition can be seen as a way for computers to interpret human body language, building a richer bridge between machine and human than text-based or even graphical user interfaces. Therefore, a combination of display technology and computer vision algorithms based on gesture recognition can be considered to realize a machine detection mode of vision detection, and to solve the problems of the existing vision detection mode, such as high manual detection cost, large regional limitation on detection, and poor detection experience.

The application provides a vision detection method, a vision detection device, equipment and a storage medium, and aims to solve the technical problems in the prior art.

The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments.

The embodiment of the present application provides a vision testing device 100, a schematic frame diagram of which is shown in fig. 8, including but not limited to: a display 110, a camera 120, and a controller 130.

The controller 130 is in signal connection with the display 110 and the camera 120, respectively.

The controller 130 is configured to execute any one of the vision testing methods provided in the embodiments of the present application, which will be described in detail below and therefore will not be described herein again.

In this embodiment, the display 110 may be used to generate and present visually recognizable information to the user, as well as presenting detection result information. The camera 120 may be used to obtain the body image information of the user for visually recognizable information. The controller 130 may be configured to control the display 110 and the camera 120 to perform the aforementioned actions, and may output the vision test result information according to the limb image information and the visually recognizable information and according to the analysis rule in the vision test method provided in the present application.
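As a purely illustrative sketch (not part of the claimed subject matter), the cooperation of the display 110, the camera 120 and the controller 130 described above may be pictured as follows; the class and method names are hypothetical and chosen only for readability:

```python
# Minimal sketch, hypothetical names only: the controller holds references to
# the display and the camera, mirroring the "signal connection" described above.

class Display:
    def show(self, content):
        """Present visually recognizable information or detection result information."""
        ...

class Camera:
    def capture(self):
        """Return one frame of limb image information."""
        ...

class Controller:
    def __init__(self, display: Display, camera: Camera):
        self.display = display   # signal connection to display 110
        self.camera = camera     # signal connection to camera 120
```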

Therefore, the vision test equipment 100 provided by the embodiment can realize machine test of vision test, can replace the current manual vision test, and can effectively reduce the cost of manual test.

Optionally, the vision detection apparatus 100 provided by this embodiment can be a home electronic apparatus, or a personal portable electronic apparatus, and the requirement for the detection location is extremely low during operation, so that the problem of detection region limitation in manual detection is effectively overcome, the detection flexibility of the vision detection apparatus 100 is high, the interestingness is high, and the detection experience obtained by the user is better.

Alternatively, the vision detecting device 100 may be at least one of any product or component with a display function, such as a smart television, a digital photo frame, a digital flower screen, an advertisement machine, a mobile phone, a smart watch, and a tablet computer.

Alternatively, the visually recognizable information may be visually detectable characters, such as E-or C-symbols, or other patterns.

Alternatively, the evaluation quality of the visually recognizable information may be the size of the displayed character: the smaller the character size, the better the evaluation quality; conversely, the larger the character size, the poorer the evaluation quality. The evaluation quality of the visually recognizable information may also be other distinguishable gradations, such as the density of the pattern lines or the similarity of the pattern color to the background color. Correspondingly, the denser the pattern lines, the better the evaluation quality; and the more similar the pattern color is to the background color, the better the evaluation quality.
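As a purely illustrative sketch (the level count and sizes below are made-up example values, not taken from this disclosure), the mapping from a level index i to an evaluation quality expressed as character size could look like this:

```python
# Hypothetical mapping: higher levels use smaller characters, i.e. better
# evaluation quality, following the rule described above. Example values only.

CHARACTER_SIZES_PT = [72, 60, 48, 36, 28, 20, 14, 10]   # level 1 (worst) ... level 8 (best)

def character_size_for_level(i: int) -> int:
    """Return the character size for level i (1-based); smaller size = better evaluation quality."""
    i = max(1, min(i, len(CHARACTER_SIZES_PT)))          # clamp to the designed range of levels
    return CHARACTER_SIZES_PT[i - 1]
```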

In some possible embodiments, the vision testing device 100 may also include a memory. The controller 130 and the memory are electrically connected, for example by a bus. Optionally, the controller 130 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The controller 130 may also be a combination of computing functions, e.g., a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.

Alternatively, the bus may include a path that carries information between the aforementioned components. The bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.

Alternatively, the memory may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.

In some possible embodiments, the vision testing device 100 may also include a monitoring unit. The monitoring unit may be used to monitor the detected distance between the user and the display 110. The controller 130 determines whether the current vision test result information is valid through the detection distance obtained by the monitoring unit; alternatively, the controller 130 adaptively adjusts the evaluation quality of the visually recognizable information displayed on the display 110 through the detected distance obtained by the monitoring unit to compensate or correct the detection error caused by the error of the detected distance.
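A minimal sketch of such a distance-based correction, assuming a nominal design distance and a measured distance from the monitoring unit (the names and the 3-metre value are hypothetical, not specified by this disclosure):

```python
# Hypothetical correction: keep the visual angle of the character constant by
# scaling its displayed size in proportion to the measured user-display distance.

NOMINAL_DISTANCE_M = 3.0   # assumed design distance; example value only

def compensated_size(base_size: float, measured_distance_m: float) -> float:
    """Scale the character so that a nearer (or farther) user sees the same visual angle."""
    return base_size * (measured_distance_m / NOMINAL_DISTANCE_M)
```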

In some possible embodiments, the vision testing device 100 may also include a transceiver. The transceiver may be used for reception and transmission of signals. The transceiver may allow the controller 130 of the vision detecting device 100 to perform wireless or wired communication with other devices or the cloud end to exchange data, for example, to facilitate the vision detecting device 100 to upload vision detecting result information to the other devices or the cloud end, or to facilitate the vision detecting device 100 to download update packages from the other devices or the cloud end, to update materials of visually recognizable information, and the like. It should be noted that the number of the transceivers in practical application is not limited to one.

In some possible embodiments, the vision testing device 100 may also include a spare input unit. The spare input unit may be used to receive input numeric, character, image and/or sound information or to generate key signal inputs related to user settings and function control of the controller 130. The alternate input units may include, but are not limited to, one or more of a touch screen, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, a camera, a microphone, and the like.

In some possible embodiments, the vision testing device 100 may include other output units in addition to the aforementioned display 110 for presenting visually identifiable information. Other output units may be used to output or present information processed by the controller 130. Other output units may include, but are not limited to, one or more of a display, a speaker, a vibrator, and the like.

It will be appreciated by those skilled in the art that the controller 130 of the vision testing device 100 provided in the embodiments of the present application may be specially designed and manufactured for the required purposes, or may comprise a known device in a general-purpose computer. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium or in any type of medium suitable for storing electronic instructions and respectively coupled to a bus.

Based on the same inventive concept, the embodiment of the application provides a vision detection method, which comprises the following steps: and after the ith-level visually recognizable information is generated and displayed, carrying out periodic detection.

Alternatively, the controller 130 in the vision inspection apparatus 100 provided by the foregoing embodiment generates the i-th-level visually recognizable information and controls the display 110 to present the i-th-level visually recognizable information.

As shown in FIG. 1, the detection process of one cycle includes, but is not limited to, steps S101-S103:

S101: Acquiring the limb image information of the user for the i-th-level visually recognizable information.

Alternatively, the body image information of the user for the i-th visually recognizable information is acquired by the camera 120 in the vision detecting apparatus 100 provided in the foregoing embodiment, and is sent to the controller 130.

S102: Determining whether the limb image information matches the i-th-level visually recognizable information; if they match, generating and displaying the (i+1)-th-level visually recognizable information, and performing detection of the next cycle until a detection end condition is met; and if they do not match, generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information, and performing detection of subsequent cycles until the detection end condition is met.

Alternatively, the controller 130 in the vision inspection apparatus 100 provided by the foregoing embodiment determines whether the limb image information matches the i-th-level visually recognizable information.

S103: If the detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level, the i-th-level and the (i+1)-th-level visually recognizable information progresses from poorer to better.

Alternatively, the controller 130 in the vision detecting device 100 provided by the foregoing embodiment generates the detection result information corresponding to the current-stage visually recognizable information and controls the display 110 to present the detection result information.

Alternatively, the evaluation quality of the visually recognizable information may be the size of the displayed character: the smaller the character size, the better the evaluation quality; conversely, the larger the character size, the poorer the evaluation quality. The evaluation quality of the visually recognizable information may also be other distinguishable gradations, such as the density of the pattern lines or the similarity of the pattern color to the background color. Correspondingly, the denser the pattern lines, the better the evaluation quality; and the more similar the pattern color is to the background color, the better the evaluation quality.

According to the vision detection method provided by this embodiment, visually recognizable information is displayed to the user, limb image information of the user for the visually recognizable information is acquired, and vision detection result information is output from the limb image information and the visually recognizable information according to the analysis rules provided by this application, which facilitates a machine detection mode of vision detection. Machine detection can replace current manual vision detection and thereby effectively reduce the cost of manual detection. Machine detection can be realized by household electronic equipment or even personal portable electronic equipment, places extremely low requirements on the detection location, and thereby effectively overcomes the regional limitation of manual detection; machine detection is highly flexible and engaging, and the user's detection experience is better.
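As a purely illustrative sketch of the periodic detection of steps S101-S103 (all helper names are hypothetical; the mismatch branch is sketched separately after steps S201-S204 below):

```python
# Hypothetical sketch of one run of periodic detection (steps S101-S103).

def run_detection(controller, i: int):
    while True:
        controller.display.show(level=i)              # generate and present level-i information
        limb_image = controller.camera.capture()      # S101: acquire limb image information
        if controller.matches(limb_image, level=i):   # S102: does the limb image match?
            i += 1                                    # matched: promote to level i+1
        else:
            i = handle_mismatch(controller, i)        # not matched: retry same level / demote
        if controller.end_condition_met(i):           # S103: detection end condition
            result = controller.result_for(level=i)   # detection result for the current level
            controller.display.show(result)
            return
```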

Optionally, the limb image information includes, but is not limited to, at least one of finger pointing information, arm pointing information, leg pointing information, and head pointing information. For example, the user bends the index finger, middle finger, ring finger and little finger into a fist and holds the thumb out straight; the direction of the thumb (left, right, up, down, or the like) is then used as the direction information in the limb image information.
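As a purely illustrative sketch of how a pointing direction could be derived from limb image information (the keypoint source, e.g. any hand or pose estimator, is assumed and not specified by this disclosure):

```python
# Hypothetical sketch: derive left/right/up/down from two keypoints of the limb,
# e.g. the base and the tip of the outstretched thumb, in image coordinates.

def pointing_direction(base_xy, tip_xy) -> str:
    dx = tip_xy[0] - base_xy[0]
    dy = tip_xy[1] - base_xy[1]
    if abs(dx) >= abs(dy):                     # dominant horizontal component
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"          # image y-axis grows downward
```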

In some possible embodiments, in step S102, the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information is generated and displayed, and detection of subsequent cycles is performed until the detection end condition is met, as shown in fig. 2, including but not limited to steps S201 to S204:

S201: Generating and displaying the i-th-level visually recognizable information in the other direction.

Alternatively, the i-th-level visually recognizable information in the other direction is generated by the controller 130 in the vision detecting device 100 provided by the foregoing embodiment, and the display 110 is controlled to present it. For example, if the i-th-level visually recognizable information is an "E" opening in one direction, the i-th-level visually recognizable information in the other direction is the same "E" rotated so that it opens in a different direction, displayed at the same size.

S202: Acquiring another piece of limb image information of the user for the i-th-level visually recognizable information in the other direction.

Alternatively, the camera 120 in the vision inspection apparatus 100 provided in the foregoing embodiment acquires another piece of limb image information of the user for the i-th-level visually recognizable information in the other direction, and transmits it to the controller 130.

S203: If the other limb image information is confirmed to match the i-th-level visually recognizable information in the other direction, generating and displaying the (i+1)-th-level visually recognizable information, and performing detection of the next cycle until the detection end condition is met.

S204: If the other limb image information does not match the i-th-level visually recognizable information in the other direction, generating and displaying the (i-1)-th-level visually recognizable information, and performing detection of the next cycle until the detection end condition is met.

Alternatively, in both step S203 and step S204, the determination of whether the other limb image information matches the i-th-level visually recognizable information in the other direction may be made by the controller 130 in the vision detecting apparatus 100 provided in the foregoing embodiment.

In this embodiment, after it is determined that the limb image information does not match the i-th-level visually recognizable information, i-th-level visually recognizable information in another direction, with the same evaluation quality, is presented first rather than the evaluation quality level being lowered immediately, as shown in the sketch below. This gives the user another chance to respond and effectively reduces the negative effect of an invalid detection judgment caused by user misoperation or by the vision detection device 100 failing to acquire the limb image information. It also reduces unnecessary lowering of the evaluation quality level, which can shorten the vision detection period and improve the vision detection efficiency.
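A minimal sketch of this mismatch branch (steps S201-S204), continuing the run_detection sketch above; all helper names are hypothetical:

```python
# Hypothetical sketch: on a mismatch, first re-test level i in another direction,
# and only demote to level i-1 if the second attempt also fails.

def handle_mismatch(controller, i: int) -> int:
    controller.display.show(level=i, other_direction=True)  # S201: same level, other direction
    limb_image = controller.camera.capture()                 # S202: acquire another limb image
    if controller.matches(limb_image, level=i):              # S203: matched after all -> promote
        return i + 1
    controller.count_mismatch()                               # S204: record the failure, then demote
    return i - 1
```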

In some possible embodiments, in the step S204, the i-1 th-level visually recognizable information is generated and displayed, and the detection of the next cycle is performed until the detection end condition is satisfied, as shown in fig. 3, including but not limited to the steps S301 to S303:

S301: Generating and presenting the (i-1)-th-level visually recognizable information.

Alternatively, the controller 130 in the vision detecting device 100 provided by the foregoing embodiment generates the (i-1)-th-level visually recognizable information and controls the display 110 to present it. For example, if the i-th-level visually recognizable information is an "E", the (i-1)-th-level visually recognizable information may be an "E" (in the same or another direction) that is one size larger than the i-th-level visually recognizable information.

S302: and acquiring further limb image information of the user aiming at the i-1 level visual identifiable information.

Alternatively, the camera 120 in the vision inspection apparatus 100 provided in the foregoing embodiment acquires further limb image information of the user with respect to the i-1 st visually recognizable information and sends the further limb image information to the controller 130.

S303: If it is determined that the further limb image information does not match the (i-1)-th-level visually recognizable information, determining whether the number of mismatches has reached a set number of times; and if the set number of times has been reached, determining that the detection end condition is met.

Alternatively, the controller 130 in the vision testing apparatus 100 provided in the foregoing embodiment determines whether the further limb image information matches the (i-1)-th-level visually recognizable information; the controller 130 also determines whether the number of mismatches has reached the set number of times, and if so, determines that the detection end condition is satisfied.

In this embodiment, by determining whether the number of mismatches reaches the set number as a determination that the detection end condition is satisfied, it is possible to avoid excessive periodic detection, which is beneficial to saving resources of an execution device (e.g., the vision detecting device 100 provided in the foregoing embodiment).

Optionally, the set number of mismatches can be freely configured as needed, and can also be adjusted automatically according to the user's habits using machine learning techniques.

In some possible embodiments, in the step S204, the i-1 th-level visually recognizable information is generated and displayed, and the detection of the next cycle is performed until the detection end condition is satisfied, as shown in fig. 4, including but not limited to the steps S401 to S403:

S401: Generating and presenting the (i-1)-th-level visually recognizable information.

Alternatively, the controller 130 in the vision detecting device 100 provided by the foregoing embodiment generates the (i-1)-th-level visually recognizable information and controls the display 110 to present it. For example, if the i-th-level visually recognizable information is an "E", the (i-1)-th-level visually recognizable information may be an "E" (in the same or another direction) that is one size larger than the i-th-level visually recognizable information.

S402: and acquiring further limb image information of the user aiming at the i-1 level visual identifiable information.

Alternatively, the camera 120 in the vision inspection apparatus 100 provided in the foregoing embodiment acquires further limb image information of the user with respect to the i-1 st visually recognizable information and sends the further limb image information to the controller 130.

S403: If it is determined that the further limb image information does not match the (i-1)-th-level visually recognizable information, determining whether the evaluation quality of the (i-1)-th-level visually recognizable information has reached the worst design evaluation quality; and if the worst design evaluation quality has been reached, determining that the detection end condition is met.

Alternatively, the controller 130 in the vision inspection apparatus 100 provided in the foregoing embodiment determines whether the further limb image information matches the (i-1)-th-level visually recognizable information; the controller 130 also determines whether the evaluation quality of the (i-1)-th-level visually recognizable information has reached the worst design evaluation quality, and if so, determines that the detection end condition is satisfied.

In the embodiment, by determining whether the evaluation goodness of the i-1 th-level visually recognizable information reaches the worst design evaluation goodness as a determination that the detection end condition is met, a closed loop of machine detection can still be realized when the limit of the database is reached in the vision detection process, and downtime can be avoided.

In some possible embodiments, in the step S102, the i +1 th-level visually recognizable information is generated and displayed, and the detection of the next cycle is performed until the detection end condition is satisfied as shown in fig. 5, including but not limited to the steps S501 to S503:

S501: Generating and displaying the (i+1)-th-level visually recognizable information.

Alternatively, the (i+1)-th-level visually recognizable information is generated by the controller 130 in the vision detecting device 100 provided by the foregoing embodiment, and the display 110 is controlled to present it. For example, if the i-th-level visually recognizable information is an "E", the (i+1)-th-level visually recognizable information may be an "E" (in the same or another direction) that is one size smaller than the i-th-level visually recognizable information.

S502: and acquiring further limb image information of the user aiming at the i +1 th-level visually recognizable information.

Alternatively, the camera 120 in the vision inspection apparatus 100 provided in the foregoing embodiment acquires further limb image information of the user with respect to the i +1 th-level visually recognizable information, and transmits the further limb image information to the controller 130.

S503: If it is determined that the further limb image information matches the (i+1)-th-level visually recognizable information, determining whether the evaluation quality of the (i+1)-th-level visually recognizable information has reached the optimal design evaluation quality; and if the optimal design evaluation quality has been reached, determining that the detection end condition is met.

Alternatively, the controller 130 in the vision inspection apparatus 100 provided in the foregoing embodiment determines whether the further limb image information matches the (i+1)-th-level visually recognizable information; the controller 130 also determines whether the evaluation quality of the (i+1)-th-level visually recognizable information has reached the optimal design evaluation quality, and if so, determines that the detection end condition is satisfied.

In this embodiment, by using whether the evaluation quality of the (i+1)-th-level visually recognizable information has reached the optimal design evaluation quality as a determination that the detection end condition is satisfied, a closed loop of machine detection can still be realized when the limit of the database is reached during vision detection, and downtime can be avoided.

Based on the same inventive concept, the embodiment of the application provides another vision detection method, which comprises the following steps: and after the ith-level visual identifiable information and answer information for the user to select are generated and displayed, periodic detection is carried out.

Alternatively, the controller 130 in the vision inspection apparatus 100 provided by the foregoing embodiment generates the i-th level visually recognizable information and the answer information for the user to select, and controls the display 110 to present the i-th level visually recognizable information and the answer information for the user to select.

As shown in fig. 6, the detection process of one cycle includes, but is not limited to, steps S601-S603:

S601: Acquiring the limb image information of the user for the i-th-level visually recognizable information, and confirming that limb image information which can be mapped to the answer information and is held for the set time is valid limb image information.

Alternatively, the body image information of the user for the i-th visually recognizable information is acquired by the camera 120 in the vision detecting apparatus 100 provided in the foregoing embodiment, and is sent to the controller 130.

S602: Determining whether the limb image information matches the i-th-level visually recognizable information; if they match, generating and displaying the (i+1)-th-level visually recognizable information, and performing detection of the next cycle until a detection end condition is met; and if they do not match, generating and displaying the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information, and performing detection of subsequent cycles until the detection end condition is met.

Alternatively, the controller 130 in the vision inspection apparatus 100 provided by the foregoing embodiment determines whether the limb image information matches the i-th-level visually recognizable information.

S603: If the detection end condition is met, generating and displaying detection result information corresponding to the current-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level, the i-th-level and the (i+1)-th-level visually recognizable information progresses from poorer to better.

Alternatively, the controller 130 in the vision detecting device 100 provided by the foregoing embodiment generates the detection result information corresponding to the current-stage visually recognizable information and controls the display 110 to present the detection result information.

In the other vision detection method provided in this embodiment, visually recognizable information and answer information for the user to select are displayed to the user. After the limb image information of the user for the visually recognizable information is obtained, it is first determined whether the limb image information is valid limb image information, that is, whether it can be mapped to the answer information and whether the mapped selection is held for a set time. This provides a more diverse detection experience for the user.

For example, the user performs a corresponding body motion in response to the visually recognizable information displayed on the display 110; the camera 120 captures the body motion and converts it into limb image information; the controller 130 receives the limb image information and controls the display 110 to display a corresponding cursor; the user then moves the limb according to the position of the cursor relative to the answer information, so that the cursor enters the selection frame of an answer and stays there for the set time. At this point the controller 130 can determine that the limb image information is valid limb image information and continue with the subsequent determination.
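A minimal sketch of this dwell-based selection (the dwell time, polling rate and helper names are hypothetical, not specified by this disclosure):

```python
import time

DWELL_SECONDS = 2.0          # example "set time"
POLL_INTERVAL = 0.05         # poll the cursor position at ~20 Hz

def select_answer(get_cursor_xy, answer_boxes):
    """Return the answer whose selection box the cursor stays inside for DWELL_SECONDS."""
    current, dwell_start = None, None
    while True:
        x, y = get_cursor_xy()                                   # cursor mapped from limb image info
        hit = next((answer for answer, (x0, y0, x1, y1) in answer_boxes.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != current:                                       # entered a new box (or left one)
            current, dwell_start = hit, time.monotonic()
        elif hit is not None and time.monotonic() - dwell_start >= DWELL_SECONDS:
            return hit                                           # held long enough: valid selection
        time.sleep(POLL_INTERVAL)
```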

After confirming whether the limb image information is effective limb image information, outputting vision detection result information according to the limb image information and the visual identifiable information and the analysis rule provided by the application, thereby being beneficial to realizing a machine detection mode of vision detection. The machine detection can replace the current manual vision detection, so that the cost of manual detection can be effectively reduced; the machine detection is beneficial to being realized by family electronic equipment or even personal portable electronic equipment, and has extremely low requirement on detection places, thereby effectively overcoming the problem of detection region limitation existing in manual detection, and the machine detection has high flexibility and strong interest, and the detection experience obtained by a user is better.

Alternatively, the answer information may be a combination of information including a correct answer and at least one incorrect answer. For example, if the visually recognizable information currently presented is an "E" opening to the right, the answer information may include the correct answer "→" and a wrong answer such as "↑"; of course, the wrong answers "←" and/or "↓" may also be added to the answer information as necessary.

Alternatively, the answer information may be presented simultaneously with the visually identifiable information.

Optionally, the visually identifiable information is displayed first, and then the answer information is displayed. For example, the display 110 first presents the visually identifiable information for a period of time (e.g., 10 seconds), and then the display 110 presents only the answer information until the user selection is complete; alternatively, the display 110 may first display the visually recognizable information for a period of time, and then the display 110 may simultaneously display the visually recognizable information and the answer information until the user selection is completed.

Optionally, the detection ending condition includes that the number of times that the limb image information and the visually recognizable information are not matched reaches a set number of times, or the evaluation quality of the current-level visually recognizable information reaches the optimal design evaluation quality.
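Taken together with claim 4, the end condition can be summarized in a small predicate; the sketch below is illustrative only and its thresholds are hypothetical example values:

```python
# Hypothetical sketch of the detection end conditions: too many mismatches,
# the current level already at the optimal design evaluation quality, or
# (per claim 4) already at the worst design evaluation quality.

MAX_MISMATCHES = 3      # example "set number of times"
BEST_LEVEL = 8          # example level with optimal design evaluation quality
WORST_LEVEL = 1         # example level with worst design evaluation quality

def end_condition_met(mismatch_count: int, level: int) -> bool:
    return (mismatch_count >= MAX_MISMATCHES
            or level >= BEST_LEVEL
            or level <= WORST_LEVEL)
```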

Based on the same inventive concept, the embodiment of the present application provides a vision testing apparatus 200, a schematic frame diagram of which is shown in fig. 7, including but not limited to: a visually recognizable information display module 210, a limb image information acquisition module 220 and an information processing module 230.

The visually identifiable information presentation module 210 is configured to: generate and display the i-th-level visually recognizable information; if the limb image information of the user for the i-th-level visually recognizable information matches the i-th-level visually recognizable information, generate and display the (i+1)-th-level visually recognizable information; if not, generate and display the i-th-level visually recognizable information in another direction and/or the (i-1)-th-level visually recognizable information; wherein i is a positive integer, and the evaluation quality of the (i-1)-th-level, the i-th-level and the (i+1)-th-level visually recognizable information progresses from poorer to better; and if the detection end condition is met, generate and display detection result information corresponding to the current-level visually recognizable information.

The limb image information acquisition module 220 is configured to:

The information processing module 230 is configured to: and determining whether the limb image information is matched with the i-th-level visually recognizable information or not until a detection ending condition is met.

The vision testing apparatus 200 provided in this embodiment is used to implement various optional embodiments of the vision testing method. And will not be described in detail herein.

In some possible embodiments, the information processing module 230 is further configured to: and confirming whether the other limb image information is matched with the ith-level visually recognizable information of the other direction or not until the detection ending condition is met.

The visually identifiable information presentation module 210 is further operable to: if the image information of the other limb is matched with the ith-level visual identifiable information in the other direction, generating and displaying the (i + 1) -level visual identifiable information, and detecting in the next period until the detection ending condition is met; and if the limb image information does not match with the visual identification information in the other direction, generating and displaying the i-1 level visual identification information.

In some possible embodiments, the limb image information acquiring module 220 is configured to: and acquiring further limb image information of the user aiming at the i-1 level visual identifiable information.

The information processing module 230 is further configured to: if the image information of the other limb is determined not to be matched with the i-1 level visual identifiable information, determining whether the unmatched times reach the set times or not; and if the set times are reached, determining that the detection end condition is met.

In some possible embodiments, the limb image information acquiring module 220 is configured to: and acquiring further limb image information of the user aiming at the i-1 level visual identifiable information.

The information processing module 230 is further configured to: if it is determined that the further limb image information does not match the (i-1)-th-level visually recognizable information, determine whether the evaluation quality of the (i-1)-th-level visually recognizable information has reached the worst design evaluation quality; and if the worst design evaluation quality has been reached, determine that the detection end condition is met.

In some possible embodiments, the limb image information acquiring module 220 is configured to: and acquiring further limb image information of the user aiming at the i +1 th-level visually recognizable information.

The information processing module 230 is further configured to: if it is determined that the further limb image information matches the (i+1)-th-level visually recognizable information, determine whether the evaluation quality of the (i+1)-th-level visually recognizable information has reached the optimal design evaluation quality; and if the optimal design evaluation quality has been reached, determine that the detection end condition is met.

In some possible implementations, the visually identifiable information presentation module 210 is to: and generating and displaying the ith-level visually recognizable information and answer information for the user to select.

The information processing module 230 is further configured to: confirm that limb image information which can be mapped to the answer information and is held for the set time is valid limb image information.

Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements any one of the vision detection methods provided in the foregoing embodiments.

The computer-readable storage medium provided by the embodiment of the application is suitable for various optional implementations of the vision detection method. And will not be described in detail herein.

Those skilled in the art will appreciate that the computer-readable storage medium provided by the embodiments can be any available medium that can be accessed by the electronic device, including volatile and nonvolatile media as well as removable and non-removable media. The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROMs, RAMs, EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards. That is, a computer-readable storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).

By applying the embodiment of the application, at least the following beneficial effects can be realized:

1. Based on the vision detection method provided by the embodiments of the present application, visually recognizable information is displayed to the user, the user's limb image information for the visually recognizable information is acquired, and vision detection result information is output from the limb image information and the visually recognizable information according to the analysis rules provided by the present application, thereby realizing a machine detection mode of vision detection. Machine detection can replace current manual vision testing and thus effectively reduce the cost of manual testing. It can be implemented by household electronic devices or even personal portable electronic devices and places extremely low requirements on the testing location, thereby effectively overcoming the regional limitation of manual testing; machine detection is also highly flexible and engaging, giving the user a better testing experience.

2. Based on the vision detection method provided by the embodiments of the present application, after it is determined that the limb image information does not match the ith-level visually recognizable information, the ith-level visually recognizable information in another direction, with the same evaluation goodness, is presented first instead of immediately lowering the evaluation goodness level. This gives the user another chance to respond and effectively reduces the negative impact of an invalid detection judgment caused by user misoperation or by the vision detection device 100 failing to acquire the limb image information. It also reduces the number of times the evaluation goodness level is lowered unnecessarily, which shortens the vision detection period and improves detection efficiency.

3. Based on the vision detection method provided by the embodiments of the present application, determining whether the number of mismatches has reached the set number, and treating this as the detection end condition being met, avoids excessive periodic detection and saves resources of the executing device.

4. Based on the vision detection method provided by the embodiments of the present application, determining whether the evaluation goodness of the (i-1)th-level visually recognizable information has reached the worst designed evaluation goodness, and treating this as the detection end condition being met, allows machine detection to still form a closed loop when the vision test reaches the lower limit of the database, avoiding the process hanging.

5. Based on the vision detection method provided by the embodiments of the present application, determining whether the evaluation goodness of the (i+1)th-level visually recognizable information has reached the best designed evaluation goodness, and treating this as the detection end condition being met, allows machine detection to still form a closed loop when the vision test reaches the upper limit of the database, avoiding the process hanging.

6. Based on the vision detection method provided by the embodiments of the present application, visually recognizable information and answer information for the user to select from are displayed to the user; after the user's limb image information for the visually recognizable information is acquired, it is first confirmed whether the limb image information is valid, that is, whether the limb image information can be mapped onto the answer information and whether the mapped answer information meets the set time, and this is used as the basis for judging whether the limb image information is valid limb image information. This helps provide a more varied testing experience for the user.

7. Based on the vision testing device 100 provided in the embodiments of the present application, the display 110 can be used to generate and display visually recognizable information to the user and to display detection result information. The camera 120 can be used to acquire the user's limb image information for the visually recognizable information. The controller 130 can be configured to control the display 110 and the camera 120 to perform the foregoing actions, and can output vision detection result information from the limb image information and the visually recognizable information according to the analysis rules in the vision detection method provided by the present application.
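
How the three components could interact in one cycle is sketched below; the class and method names are illustrative only and do not correspond to a disclosed implementation.

```python
# Illustrative sketch of the device composition: display 110, camera 120, controller 130.
class VisionTestDevice:
    def __init__(self, display, camera, controller):
        self.display = display        # shows visually recognizable info and results
        self.camera = camera          # captures the user's limb image information
        self.controller = controller  # drives both and matches image against info

    def run_one_cycle(self, level, direction):
        info = self.display.show(level, direction)
        limb = self.camera.capture()
        return self.controller.match(limb, info)
```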

Those skilled in the art will appreciate that the various operations, methods, steps, measures, and schemes in the processes discussed in this application can be alternated, modified, combined, or deleted. Furthermore, other steps, measures, and schemes in the various operations, methods, and processes discussed in this application can also be alternated, modified, rearranged, decomposed, combined, or deleted. Furthermore, steps, measures, and schemes in the prior art corresponding to the various operations, methods, and processes disclosed in this application can likewise be alternated, modified, rearranged, decomposed, combined, or deleted.

In the description of the present application, it should be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience of describing the present application and simplifying the description, rather than indicating or implying that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation; therefore, they should not be construed as limiting the present application.

The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.

In the description of the present application, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected", and "coupled" are to be construed broadly; for example, a connection may be a fixed connection, a removable connection, or an integral connection; it may be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.

In the description herein, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.

It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.

The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.
