Production line detection method

Document No.: 154448    Publication date: 2021-10-26

Reading note: This technology, "Production line detection method", was designed and created by 张加宇, 文娟, 王利华, 许艳春 and 魏春龙 on 2021-09-18. Abstract: The invention provides a production line detection method, belonging to the field of computer technology. A graphic-processing method is used to photograph the oxygen indicator inside a packaging bag on the production line; the oxygen indicator pattern is then located by search and calculation, its chromaticity is obtained through a chromaticity operation, and the result is displayed on a detection interface for the operator to check. The invention also provides a dedicated detection interface display and operation method. The invention can effectively reduce personnel fatigue and improve production efficiency and production quality.

1. The production line detection method comprises the following steps: acquiring a picture of an article on a production line, processing the picture, and displaying and operating the processed picture; the method is characterized in that:

the acquiring and processing the picture specifically comprises the following steps: in a time window, a plurality of color pictures are obtained from at least two different angles aiming at the same article with the externally visible circular marker; and processing the acquired pictures by a computer processor according to the following steps in sequence:

s1, finding out color blocks with elliptical characteristics in all pictures according to the following conditions:

(i) the major axis of the ellipse is approximately equal to the standard preset diameter;

(ii) at least 70% of the color block edge exhibits the elliptical edge feature;

s2, calculating the area values of all color blocks:

s21, when two or more color blocks exist, each color block is processed as follows:

(S211) binarizing the color block to obtain the closed edge of the black pixel region and the area value of the black pixel region;

(S212) comparing the area values of the black pixel regions and taking at least one color block with the maximum area value;

s22, when only one color block is provided, the color block is determined as having the maximum area value;

s3, finding out color blocks with small ellipticity from the color blocks obtained in the step S2;

s4, carrying out a chromaticity operation on the color blocks obtained in step S3, which specifically comprises: taking, within the closed edge of the black pixel region, all pixels whose mirror pixel about the major axis of the ellipse also exists (i.e. the number of symmetric pixels is not 0), and calculating the average chromaticity of these pixels; the average value is determined as the chromaticity of the elliptical marker;

the display and operation specifically include: and displaying and viewing the article picture and the obtained chromaticity.

2. The production line inspection method as set forth in claim 1, further comprising the steps of:

printing an independent first label on each acquired picture, the first label at least comprising the serial number of the article to which the picture belongs and the serial number of the picture, and, when step S1 is executed, printing on each color block with the elliptical feature a second label corresponding to the first label; the first label and the second label are used to associate each color block with the picture from which it was taken, and to associate each picture with the article from which it was obtained.

3. The production line detection method of claim 1, wherein the method for finding the color block with small ellipticity comprises the following steps:

s31, finding out color blocks with small ellipticity by calculating the lengths of the major axis and the minor axis of the ellipse and the ratio of the major axis to the minor axis;

s32, when there is only one color block with the largest area value, the color block is determined as the color block with small ellipticity.

4. The production line inspection method of claim 1, wherein said one time window is: the time from the beginning when the item first appears within the visible range of all of the picture taking means to the end when the item leaves the visible range of any one of the picture taking means.

5. The production line inspection method as set forth in claim 1,

the different angles are: when the picture acquisition device acquires one picture, the included angle and the orientation between the picture acquisition device and the article are different from those when the other pictures are acquired; the included angle is the angle between the horizontal plane and a virtual line connecting the center of the article with the picture acquisition device; the orientation is the orientation of the picture acquisition device relative to the article;

the major axis being approximately equal to the standard preset diameter means, specifically, that the difference between the obtained ellipse major-axis length and the preset major-axis length is within 6%.

6. The production line inspection method as set forth in claim 1,

the definition of a circle is included in the definition of an ellipse, and when the ellipse in the found picture is a circle, the arbitrary diameter of the circle is taken as the major axis of the ellipse, and the minor axis of the ellipse is equal to the major axis.

7. The production line inspection method of claim 1, wherein the display and viewing operation is specifically:

the display and viewing operations are implemented by a display device, a command input device, coupled to the processor;

the display device at least has a detection interface coupled with the processor in the displayed content; the detection interface comprises:

(i) quick view mode: displaying the acquired picture of the article on the detection interface; the displayed picture is one on which the chromaticity operation has been performed for the marker it contains and for which the chromaticity value of the marker has been obtained;

when a plurality of articles are provided, pictures of the plurality of articles are sequentially displayed, and the adjacent area of the picture of each article is annotated with the chromaticity information of the marker;

the pictures of the plurality of articles and the chromaticity information of each marker jointly occupy more than 60% of the display area of the detection interface;

(ii) selective viewing mode: one of the pictures displayed in the quick view mode is statically displayed on the detection interface and occupies at least 70% of the area of the detection interface; the remaining area of the detection interface displays operable icons and information related to the marker chromaticity value of that picture;

(iii) operation call: in the quick view mode, clicking one of the pictures through the command input device switches to the selective viewing mode; in the selective viewing mode, at least a picture saturation adjusting function and a picture brightness adjusting function can be called, and the brightness and saturation of the picture, and of the marker within it, are adjusted so that the color of the marker can be seen clearly.

8. The production line inspection method according to claim 7, wherein in the quick view mode, at least one picture of each of the different articles is displayed in the display area for a certain time, by one of the following methods:

(i) scrolling from the detection interface in a first directional sequence;

(ii) arranged in batches and distributed across the detection interface.

9. The production line detection method according to claim 7, wherein the calling of the picture saturation adjustment function and the picture brightness adjustment function specifically comprises: when the command input device is a touch screen, the calling by gesture input on the touch screen is as follows:

touching directionally in the detection interface along a first direction calls one of the picture saturation adjusting function and the picture brightness adjusting function; touching directionally in the detection interface along a second direction calls the other of the two functions, i.e. the one not called by the first direction;

the first direction includes its reverse (reciprocating) direction; the second direction likewise includes its reverse direction;

the first direction is: the touch gesture successively crosses one of the pixel rows and the pixel columns, travelling from one side of the detection interface; the second direction is: the touch gesture successively crosses the other of the pixel rows and the pixel columns, i.e. the one not occupied by the first direction, travelling from one side of the detection interface;

when the touch gesture moves from one side and successively crosses both pixel rows and pixel columns at the same time, it is determined to satisfy the first direction and the second direction simultaneously, and the picture saturation adjusting function and the picture brightness adjusting function are called simultaneously.

10. The production line detection method according to claim 7, wherein the calling of the picture saturation adjustment function and the picture brightness adjustment function specifically comprises: when the command input device is a mouse, the calling by mouse input is as follows:

sliding the mouse directionally in the detection interface along a first direction calls one of the picture saturation adjusting function and the picture brightness adjusting function; sliding directionally in the detection interface along a second direction calls the other of the two functions, i.e. the one not called by the first direction;

the first direction includes its reverse (reciprocating) direction; the second direction likewise includes its reverse direction;

the first direction is: the mouse crosses one of the pixel rows and the pixel columns, travelling from one side; the second direction is: the mouse crosses the other of the pixel rows and the pixel columns, i.e. the one not occupied by the first direction, travelling from one side;

when the mouse moves from one side and successively crosses both pixel rows and pixel columns at the same time, it is determined to satisfy the first direction and the second direction simultaneously, and the picture saturation adjusting function and the picture brightness adjusting function are called simultaneously.

Technical Field

The invention relates to a method for processing, displaying and operating production line pictures by using a computer.

Background

In a production line, some scenes need to identify the color of an article marker on the production line. For example:

the infusion bag is filled with infusion preparation which is directly infused in the blood of a human body, so that the safety and the efficacy of the medicine are very important, and the medicine is a hard index of the production quality. Some drugs require oxygen barrier during packaging and are therefore double-layer packages: the inner layer packaging transfusion preparation is also provided with an outer layer packaging, an interlayer is arranged between the inner layer packaging and the outer layer packaging, oxygen-blocking gas is filled or not filled in the interlayer, and an oxygen content indicator is also arranged in the interlayer. The oxygen content indicator is a small disc which has blue, purple and red changes along with the oxygen content of the contact environment. For example: the small wafer with the oxygen concentration less than 0.1 percent in contact with the environment is red; the small wafer with the oxygen concentration of 0.1 to 1 percent is purple; the small discs with oxygen concentration greater than 1% are blue.

In the existing production process of infusion preparations, the small discs (oxygen content indicators) in the packaging bags must be visually identified one by one to reject unqualified products whose interlayer oxygen content exceeds the standard. This visual identification process is not conducive to improving production efficiency, ensuring production quality, or saving labor costs.

Disclosure of Invention

In order to solve the technical problem, the invention provides a production line detection method, which automatically identifies the chromaticity (blue/purple/red) of an oxygen content indicator after automatically acquiring pictures of infusion bags passing through a conveyor belt so as to determine the oxygen content index of an interlayer.

The production line detection method comprises the following steps: and acquiring pictures of the articles on the production line, processing the pictures, and displaying and operating the processed pictures. Wherein:

the acquiring and processing the picture specifically comprises the following steps: in a time window, a plurality of color pictures are obtained from at least two different angles aiming at the same article with the externally visible circular marker; and processing the acquired pictures by a computer processor according to the following steps in sequence:

s1, finding out color blocks with elliptical characteristics in all pictures according to the following conditions:

(i) the major axis of the ellipse is approximately equal to the standard predetermined diameter.

(ii) At least 70% of the color block edge exhibits the elliptical edge feature.

S2, calculating the area values of all color blocks:

s21, when two or more color blocks exist, each color block is processed as follows:

(S211) binarizing the color block to obtain the closed edge of the black pixel region and the area value of the black pixel region.

(S212) comparing area values of each black pixel region; and taking at least one color block with the maximum area value.

S22, when only one color block is present, the one color block is identified as having the largest area value.

S3, finding out color blocks with small ellipticity from the color blocks obtained in the step S2.

S4, carrying out a chromaticity operation on the color blocks obtained in step S3, which specifically comprises: taking, within the closed edge of the black pixel region, all pixels whose mirror pixel about the major axis of the ellipse also exists (i.e. the number of symmetric pixels is not 0), and calculating the average chromaticity of these pixels; the average value is taken as the chromaticity of the elliptical marker.

The display and operation specifically include: and displaying and viewing the article picture and the obtained chromaticity.

The above-mentioned production line inspection method further includes the steps of:

printing an independent first label on each acquired picture, the first label at least comprising the serial number of the article to which the picture belongs and the serial number of the picture, and, when step S1 is executed, printing on each color block with the elliptical feature a second label corresponding to the first label; the first label and the second label are used to associate each color block with the picture from which it was taken, and to associate each picture with the article from which it was obtained.

The above method for detecting a production line further includes the steps of:

and S31, finding out color blocks with small ellipticity by calculating the lengths of the major axis and the minor axis of the ellipse and the ratio of the major axis and the minor axis.

S32, when there is only one color block with the largest area value, the color block is determined as the color block with small ellipticity.

In the above production line inspection method, the term "within one time window" means: the time from the beginning when the item first appears within the visible range of all of the picture taking means to the end when the item leaves the visible range of any one of the picture taking means.

In the above-mentioned production line inspection method, the different angles refer to: when the picture acquisition device acquires one picture, the included angle and the orientation between the picture acquisition device and the article are different from those when the other pictures are acquired; the included angle is the angle between the horizontal plane and a virtual line connecting the center of the article with the picture acquisition device; the orientation is the orientation of the picture acquisition device relative to the article.

The major axis being approximately equal to the standard preset diameter means, specifically, that the difference between the obtained ellipse major-axis length and the preset major-axis length is within 6%.

In the above-mentioned production line inspection method, the definition of the circle is included in the definition of the ellipse, and when the ellipse in the found picture is a circle, the arbitrary diameter of the circle is taken as the major axis of the ellipse, and the minor axis of the ellipse is equal to the major axis.

The above method for detecting a production line is further explained in that the displaying and viewing operation specifically includes:

the display and viewing operations are implemented by a display device, a command input device, coupled to the processor.

The display device at least has a detection interface coupled with the processor in the displayed content; the detection interface comprises:

(i) quick view mode: displaying the acquired picture of the article on the detection interface; the displayed picture is one on which the chromaticity operation has been performed for the marker it contains and for which the chromaticity value of the marker has been obtained.

When a plurality of articles are provided, the images of the plurality of articles are sequentially displayed, and the adjacent area of the image of each article is annotated with the chromaticity information of the marker.

The images of the plurality of items and the colorimetric information of each of the markers collectively occupy more than 60% of a display area of the detection interface.

(ii) Selective viewing mode: one of the pictures displayed in the quick view mode is statically displayed on the detection interface and occupies at least 70% of the area of the detection interface; the remaining area of the detection interface displays operable icons and information related to the marker chromaticity value of that picture.

(iii) Operation call: in the quick view mode, clicking one of the pictures through the command input device switches to the selective viewing mode; in the selective viewing mode, at least a picture saturation adjusting function and a picture brightness adjusting function can be called, and the brightness and saturation of the picture, and of the marker within it, are adjusted so that the color of the marker can be seen clearly.

In the above-mentioned production line inspection method, in the quick view mode, at least one picture of each of the different articles is displayed in the display area for a certain time, by one of the following methods:

(i) scroll sequentially from the detection interface in a first direction.

(ii) Arranged in batches and distributed across the detection interface.

The above production line detection method further provides that the invoking of the picture saturation adjusting function and the picture brightness adjusting function specifically comprises: when the command input device is a touch screen, the calling by gesture input on the touch screen is as follows:

Touching directionally in the detection interface along a first direction calls one of the picture saturation adjusting function and the picture brightness adjusting function; touching directionally in the detection interface along a second direction calls the other of the two functions, i.e. the one not called by the first direction.

The first direction includes its reverse (reciprocating) direction; the second direction likewise includes its reverse direction.

The first direction is: the touch gesture successively crosses one of the pixel rows and the pixel columns, travelling from one side of the detection interface; the second direction is: the touch gesture successively crosses the other of the pixel rows and the pixel columns, i.e. the one not occupied by the first direction, travelling from one side of the detection interface.

When the touch gesture moves from one side and successively crosses both pixel rows and pixel columns at the same time, it is determined to satisfy the first direction and the second direction simultaneously, and the picture saturation adjusting function and the picture brightness adjusting function are called simultaneously.

The above production line detection method further provides that the invoking of the picture saturation adjusting function and the picture brightness adjusting function specifically comprises: when the command input device is a mouse, the calling by mouse input is as follows:

Sliding the mouse directionally in the detection interface along a first direction calls one of the picture saturation adjusting function and the picture brightness adjusting function; sliding directionally in the detection interface along a second direction calls the other of the two functions, i.e. the one not called by the first direction.

The first direction includes its reverse (reciprocating) direction; the second direction likewise includes its reverse direction.

The first direction is: the mouse crosses one of the pixel rows and the pixel columns, travelling from one side; the second direction is: the mouse crosses the other of the pixel rows and the pixel columns, i.e. the one not occupied by the first direction, travelling from one side;

When the mouse moves from one side and successively crosses both pixel rows and pixel columns at the same time, it is determined to satisfy the first direction and the second direction simultaneously, and the picture saturation adjusting function and the picture brightness adjusting function are called simultaneously.
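As an illustration only (not the claimed implementation), the following Python sketch shows how a drag could be classified by whether it crosses pixel rows, pixel columns, or both, and how the two adjustment functions could then be dispatched; the function names and the mapping of each direction to a particular function are assumptions made for this sketch.

    def adjust_saturation(delta):
        print("adjust saturation by", delta)

    def adjust_brightness(delta):
        print("adjust brightness by", delta)

    def dispatch_drag(x0, y0, x1, y1):
        # crossing pixel rows means the pointer moved vertically;
        # crossing pixel columns means it moved horizontally
        crossed_rows = y1 != y0
        crossed_cols = x1 != x0
        if crossed_rows:                 # assumed mapping: row-crossing drag
            adjust_saturation(y1 - y0)
        if crossed_cols:                 # assumed mapping: column-crossing drag
            adjust_brightness(x1 - x0)
        # a diagonal drag crosses both, so both functions are called

    dispatch_drag(100, 100, 100, 180)    # vertical drag: saturation only
    dispatch_drag(100, 100, 60, 140)     # diagonal drag: both functions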

Advantageous effects:

the invention automatically identifies the oxygen content indicator chromaticity (blue/purple/red) of the infusion bag passing on the conveyor belt to determine the oxygen content index of the interlayer. The working efficiency is improved, the fatigue caused by manual identification is reduced, and the production quality problems caused by personnel fatigue and poor responsibility consciousness are basically eliminated.

The specific picture of any given packaging bag can be conveniently called up and checked, and the interaction is user-friendly.

Drawings

FIG. 1 is a schematic view of a camera arrangement distribution;

FIG. 2 is a schematic view of the camera shooting angle of the oxygen indicator;

FIG. 3 is a schematic block diagram;

FIG. 4 is a schematic illustration of a detection interface in a system window;

FIG. 5 is a schematic view of a test interface;

FIG. 6 is a schematic diagram of a touch gesture in a pixel row and a pixel column.

Packaging bag c1; oxygen indicator c2; camera c3; virtual connecting line c4; conveyor belt c5; spectroscope c6; light source c7; light-blocking cover c8; color block 200; ellipse major axis 201; virtual edge 202; missing portion 203; missing symmetric portion 204; first region 205; middle region 206; second region 207; system window j1; detection interface j2; adjacent region of the picture j3; scroll area j4; related information area j5; large-map viewing area j6; first direction d1; second direction d2; pixel row x1; pixel column x2; touch gesture s1.

Detailed Description

In the description of the present embodiments, the terms "upper," "lower," "left," "right," "inner," "outer," and the like are used in the indicated orientations or positional relationships for the purpose of describing the present invention and for the purpose of facilitating understanding by a skilled artisan, and do not indicate or imply that the referenced devices or components must have a particular orientation, be constructed and operated in a particular orientation, and therefore, are not to be considered as limiting the present invention.

In this embodiment, when the work steps are described, the terms "before" and "after" refer to the sequence of the work steps, that is, one step must be based on the completion of another step. These steps are, of course, performed by the computer according to a preset program.

In this embodiment, chromaticity is used to reflect the hue of a color and sometimes also includes saturation.

The hardware part of the invention includes at least:

1. A picture acquisition device, namely a camera c3, installed on the production line, preferably above the production line conveyor belt c5 and aligned with the conveyor belt c5.

2. A computer, which acquires pictures from the camera through data exchange with the camera.

The computer has at least a processor, a display device (a screen for display) and an input device: either a touch screen, which has at least a multi-point touch function, or a mouse, which can be used both for clicking and for sliding. It is of course preferable to also have a server to support remote detection.

The present invention is preferably implemented by a computer program that instructs the associated hardware; the computer program can be stored in a computer-readable storage medium, and when executed by a processor it realizes the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.

The article in the example is a bagged infusion preparation, and the bagged infusion preparation is packaged in a packaging bag. The circular marker in the example is an oxygen indicator for indicating the amount of oxygen contained in the package.

Example one:

The picture acquisition device.

It mainly includes the camera c3, which comprises: a picture sensor including a plurality of light-receiving elements, the picture sensor generating a sensor picture by photoelectrically converting the received light with the light-receiving elements; and a modulation unit that modulates the intensity of the light received by the picture sensor using an imaging pattern provided at a predetermined distance from the light-receiving surface of the picture sensor, wherein the width of the openings of the imaging pattern provided in the modulation unit is determined based on the predetermined distance and the size of the picture sensor in the two-dimensional direction.

The camera c3 obtains a digitized picture with color information (the color format may be RGB mode or CMYK mode).

The camera may be plural, arranged in the following manner:

Referring to fig. 1, in the present embodiment three cameras are arranged as follows:

The package is a transparent plastic package, so the circular marker, i.e. the oxygen content indicator (oxygen indicator c2), is visible from the outside. The packaging bags c1 are placed on the conveyor belt c5 and move from one direction to the other; the packaging bags c1 pass one after another below the cameras, and the cameras c3 acquire pictures of each packaging bag as it passes. The conveyor belt is a light-transmitting plastic conveyor belt, preferably a translucent plastic conveyor belt; a spectroscope c6 is arranged below the conveyor belt, and a light source c7 is arranged below the spectroscope c6, so as to provide a better light source for the cameras c3. Of course, a solid-color (white or black) conveyor belt may also be used, with the light source c7 supplementing light from above to illuminate the packaging bag. A light-blocking cover c8 is arranged at the installation position of the cameras c3, providing a better lighting environment for the cameras c3 and avoiding interference from other light.

Of course, a matching defective-product rejection system can be arranged downstream of the device; relying on the identification result of the oxygen content indicator provided by the invention, packaging bags whose oxygen content indicator chromaticity does not meet the qualification requirement are rejected.

Camera arrangement: the invention arranges three cameras at different angles, so that pictures can be acquired simultaneously when the packaging bag c1 passes below them. Arranging the three cameras at different angles solves the problems of unclear imaging and overly small viewing angles that arise when the circular marker lies at an unpredictable position on the packaging bag. The specific analysis is as follows:

1. The packaging bag is soft transparent plastic packaging, and its surface unpredictably forms various complicated curved surfaces, which reflect light in different, unhelpful ways when the camera tries to capture the circular marker inside the package. If the picture is taken from only one direction, the reflection caused by the curved surface may be unavoidable; shooting from another direction can avoid that reflection, and shooting from several angles provides several chances of avoiding reflections. 2. Because the packaging bag is soft plastic, the liquid pressure inside forms a natural arc surface (bread-shaped: raised in the middle, sloping down at the front and rear), i.e. an arc curve. When the circular marker unpredictably appears anywhere in the middle, front or rear, it takes different attitudes such as facing straight up, tilted left or tilted right, and at least one of the cameras is then better oriented toward the circular marker.

Camera c3: for example, a camera produced by Hikvision (model MV-CA020-10UC), compatible with the GigE Vision V2.0 protocol and the GenICam standard, is connected to the processor provided by the invention. Its overall dimensions are 29 mm x 30 mm; such miniaturized equipment makes it easy to install several cameras in a concentrated position above the conveyor belt c5. The minimum exposure time is 1 μs, so a sufficient number of pictures can be obtained in the very short time during which the package c1 passes under the camera. Power supply and I/O are provided through a 6-pin Hirose connector, with one optically isolated input (Line 0), and picture data are sent to the processor over USB 3.0. This meets the needs of the present invention.

Example two:

The externally visible circular marker in this example is mainly the oxygen indicator c2; of course, in different packaging applications other indicators are possible, such as pH indicators.

The oxygen indicator c2 is used to indicate the oxygen concentration in a sealed container (or packaging bag) and is commonly used to check the packaging environment of electronic products, optical components and metal products. The indication works as follows: when the oxygen concentration is more than 0.5%, the indicator is blue; when the oxygen concentration is 0.1%, the indicator is red; and when the oxygen concentration is between 0.1% and 0.5%, the indicator is purple. The size selected in this example is a 20 mm diameter disc, which is easier for the computer to identify; the market supply is large and the purchase cost is low. Oxygen concentration greater than or equal to 1% corresponds to blue, approximately 0.5% to purple, and less than or equal to 0.1% to red. In this example the chromaticity is set as follows; of course, the standard chromaticity values may differ depending on the application environment. Chromaticity:

the oxygen concentration is more than or equal to 1 percent of blue (RGB chroma: R120-135, G140-160, B225-245);

the oxygen concentration is approximately equal to 0.5 percent purple (RGB chroma: R205-230, G180-210, B225-240);

the oxygen concentration is less than or equal to 0.1 percent of red (RGB chroma: R230-260, G155-180, B205-225).

Example three:

Acquiring the picture.

To identify the chromaticity level of c2, pictures are first obtained. In this example pictures are obtained for each packaging bag separately: each camera takes at least one picture after the packaging bag reaches its position. Of course, if the computer has sufficient computing power, more pictures can be taken for identification; for example, if each camera takes three pictures, the three cameras capture nine pictures, and among more pictures there are more that meet the requirements, making the identification more accurate.

Capturing the pictures only after each packaging bag reaches the designated position avoids other packaging bags entering the frame during shooting. A time window is therefore established, and shooting for each bag c1 starts when it enters its time window. During this window, bag c1 first appears under all of the cameras and is captured by all of them; the window ends once the conveyor has carried bag c1 out of the view of any one of the cameras. This ensures that only one bag is photographed at a time and prevents the picture of another bag (the one in front or behind) from being mixed into this bag's photographs. Of course, an appropriate camera arrangement can also prevent the patterns of other packaging bags from appearing in the shots.
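A minimal Python sketch of this window logic, under the assumption that each camera can report whether the bag is currently within its field of view; the data layout and function name are illustrative only.

    def capture_window(visibility_per_frame):
        # visibility_per_frame: one tuple per frame, one truth value per camera
        capturing = False
        captured = []
        for frame_index, visible in enumerate(visibility_per_frame):
            if not capturing and all(visible):
                capturing = True              # bag is now seen by all cameras: window opens
            elif capturing and not all(visible):
                break                         # bag has left some camera's view: window closes
            if capturing:
                captured.append(frame_index)
        return captured

    # three cameras, five consecutive frames
    print(capture_window([(0, 1, 1), (1, 1, 1), (1, 1, 1), (1, 0, 1), (1, 0, 0)]))  # [1, 2]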

Referring to fig. 1, each camera is at a different angle and orientation relative to the package when taking a photograph. "Different angle" means that, when one camera shoots a picture, its included angle and orientation with respect to the packaging bag differ from those of the other cameras shooting pictures of the same packaging bag. It may also be the same camera: as the packaging bag is carried to different positions by the conveyor belt c5, the included angle and orientation change and therefore differ. The included angle here is defined as the angle between the horizontal plane and the virtual connecting line c4 between the center of the packaging bag and the camera. The orientation is the orientation of the picture acquisition device relative to the package.

During shooting the conveyor belt c5 moves and the packaging bag moves with it; in addition, the cameras are arranged at different angles. Thus, between a first shot and a second shot within the window period, every picture is taken at a different angle and orientation, which better avoids reflections and unfavorable relative angles between the picture and the oxygen indicator c2, so it is more likely that pictures meeting the requirements will be found.

For example, if light source c7 is at position A and the bag moves to point A1, the camera at point A2 sits at the point where the reflection is concentrated and the picture is washed out by glare; but when the bag moves on to point B, the reflection concentration point no longer falls on the camera at A2, and the picture is more accurate.

When the package is placed on the conveyor, it is desirable to have the oxygen indicator c2 visible on the side facing upward. Of course, the oxygen indicators can be distributed on both sides of the packaging bag, so that the packaging bag arrangement work at the front end is not required to be increased (how to arrange the packaging bag is out of the range discussed in the embodiment of the invention).

The pictures taken must be color pictures; the higher the pixel count the better, where the hardware allows.

Example four:

this example describes a method of finding a color block 200 with an elliptical feature in a picture.

The obtained picture not only has the oxygen indicator color block 200, but also has other marks on the package, such as a trademark pattern of a company, a printed pattern, patterns formed by other light interference, and the like, and the oxygen indicator pattern is required to be found out from the patterns.

Viewed at different angles, the perfectly circular oxygen indicator usually appears as an ellipse; a perfect circle can also appear, but less often. An ellipse recognition method is therefore used, with the perfect circle treated as a special case of the ellipse.

The method for searching the ellipse is as follows:

and utilizing Hough transformation and geometric characteristic identification of the ellipse.

First, the major axis and the minor axis of the ellipse are separated using symmetry; Hough transformation is then used for straight-line detection to identify the major and minor axes, the center of the ellipse is determined from their intersection, and the remaining parameters are derived. Alternatively, in the fast ellipse-identification algorithm based on error theory, the key is to construct a matrix and a polynomial from the coordinates of the picture edge points, obtain the major and minor axes by solving the characteristic roots of the polynomial, and then use the variance between the actual edge coordinates and the standard ellipse edge as the point-location precision, which serves as the feature for judging whether an ellipse exists. In addition, the new ellipse detection algorithm based on the major axis and duality detects the ellipse parameters on the premise that the target is an ellipse.

The randomized Hough transform is widely applied to the detection of circular arcs and other shapes (circles, ellipses). Its basic idea is to vote for potential arcs in a picture; the algorithm then determines the existence of the arc with the highest score by checking the highest vote count. An arc can be determined entirely by a certain number of points on it; an ellipse or circle may be defined by three points. Determination of an ellipse: 1. fit ellipses from randomly selected points; 2. update the accumulator array and the corresponding matching degree; 3. output the ellipses whose score exceeds a predefined threshold.

For example: patent CN104700420A discloses an ellipse detection method, system and worm egg identification method based on Hough transformation. The ellipse identification method disclosed in this method can be used for ellipse identification in this example.

The method comprises the following steps:

1: Acquire the picture.

2: Carry out edge detection on the acquired picture with an edge detection method to obtain a binary picture. The specific edge detection method can be a zero-crossing LoG (Laplacian of Gaussian) operator or a multi-scale Canny operator; the edge detection extracts the pixels of the ellipse edge, and these edge pixels are the feature points of the ellipse. Taking Canny edge detection as the example, preprocessing the acquired picture into a binary picture comprises the following steps:

21: Denoise the acquired picture using Gaussian filtering.

22: Calculate the gradient values of the image.

23: Apply non-maximum suppression to find candidate edge pixels. Through non-maximum suppression, the coarse edge information in the picture can be extracted.

24: Connect the edges. The Canny edge detection algorithm uses dual-threshold segmentation to link edges, so the elliptical contour can be preliminarily extracted.

3: Carry out Hough transformation on the binary image to obtain the candidate ellipse centers.

An ellipse has the following two properties:

Property 1: the lines connecting any given point on the ellipse with the other points on the ellipse form a group of chords, and the midpoints of these chords form a new ellipse, called the inscribed ellipse of the original ellipse at that point.

Property 2: two points on the ellipse whose outer normals point in opposite directions are called a pair of dual points of the ellipse, and the midpoints of the lines connecting all dual points on the ellipse coincide with the center of the ellipse.

From property 1, the midpoints of the chords between non-dual points on the ellipse are scattered everywhere, while from property 2, the midpoints of the chords between all dual points on the ellipse are concentrated at the center of the ellipse. Therefore, if each edge point in the edge binary image of the original image is connected with the other points, and voting statistics are performed on the midpoints of these chords in the parameter space, a peak of the statistic will appear at the center of each ellipse, and the point corresponding to the maximum peak is the center of the candidate ellipse.

4: According to the coordinates of the candidate ellipse center and the parameter equation of the ellipse, carry out Hough transformation to obtain the parameters of the candidate ellipse.

5: Judge whether the candidate ellipse obtained from the candidate ellipse parameters and the candidate ellipse center is a true ellipse.

6: if the candidate ellipse is determined to be a true ellipse, the true ellipse is located.

This Hough-transform-based ellipse detection method extracts the ellipse outline with an edge detection method and performs the Hough transform twice: the first Hough transform yields the candidate ellipse center, and the second Hough transform, combined with the ellipse parameter equation, obtains the parameters of the candidate ellipse by voting. A candidate ellipse is then formed from the ellipse center and the ellipse parameters; on this basis, the candidate ellipse is judged to be true or false, and the true ellipse is located. By performing this true/false judgment on the candidate ellipses, false ellipses are eliminated, which reduces the false detection rate, improves the accuracy of ellipse detection, and effectively locates the ellipses.
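As a hedged illustration of the general idea (edge extraction followed by ellipse fitting), the sketch below uses OpenCV's Canny detector and a least-squares ellipse fit on contours instead of the two-pass Hough voting of the cited patent, so it approximates the approach rather than reproducing the patented algorithm; the file name is a placeholder.

    import cv2

    img = cv2.imread("demo.jpg")                    # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # step 21: Gaussian denoising
    edges = cv2.Canny(blurred, 50, 150)             # steps 22-24: gradient, NMS, dual-threshold linking

    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    candidates = []
    for contour in contours:
        if len(contour) < 5:                        # fitEllipse needs at least 5 points
            continue
        (cx, cy), (d1, d2), angle = cv2.fitEllipse(contour)
        candidates.append(((cx, cy), (max(d1, d2), min(d1, d2)), angle))  # (center, (major, minor), angle)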

Another example is: the patent CN103632366A discloses a method for identifying parameters of an ellipse target, which can be used for ellipse identification in the present invention. An ellipse identification method, comprising:

step 1: and establishing a mapping relation from the middle track fitting ellipse half-axis length to the real ellipse half-axis length.

Step 2: Perform image noise reduction on the elliptical target whose parameters are to be identified, and extract the edge point set of the elliptical target.

Step 3: Preliminarily fit the ellipse from the edge point set to obtain the initial parameters of the fitted ellipse.

Step 4: Acquire the intermediate-trajectory point set from the initial parameters.

Step 5: Fit the ellipse to the intermediate-trajectory point set to obtain the parameters of the intermediate-trajectory fitted ellipse.

In the method for identifying the parameters of the elliptical target, firstly, a mapping relation from the middle track fitting elliptical half-axis length to the real elliptical half-axis length is established, then, the elliptical target needing parameter identification is preliminarily fitted to obtain a middle track point set, and finally, the half-axis length parameters of the elliptical target are corrected through the established mapping relation.

Example five:

this example describes a method to accurately find an oxygen indicator from the found ellipses.

The oxygen indicator is a circular card; in pictures taken at different angles it appears as an ellipse, but its major axis is essentially invariant regardless of the viewing angle. A variation range for the major-axis parameter is set to account for the change in the ellipse major axis 201 caused by the varying distance between the oxygen indicator and the camera.

Of course, the circular or elliptical color blocks in the picture are not necessarily only the oxygen indicator; light spots and printed patterns on the packaging bag may also form circular or elliptical shapes, so the color block of the oxygen indicator must be found among the detected elliptical patterns.

A method of identifying the color of an oxygen indicator on a package.

Processing, by the computer processor, the example three acquired pictures in the following order of steps:

1. searching a color block 200 with an elliptical characteristic in the picture according to the following conditions:

(condition i) edge features; (condition ii) length of major axis of ellipse.

Edge characteristics: the ellipse obtained by the example four has the edge characteristics required for the present example condition. However, it is required that at least 70% of the edge features are actually visible edge pixels, not the virtual edge 202 obtained by the connected edge calculation; most picture disturbances can be excluded.

(condition ii) the major axis of the ellipse is substantially equal to the standard preset diameter. The difference value between the acquired ellipse long axis length and the preset long axis length is required to be within 6% so as to further eliminate most picture interference.

Under these two conditions, the obtained color block can be regarded as the elliptical picture formed by the image of the oxygen indicator. The requirement that the difference between the obtained ellipse major-axis length and the preset major-axis length be within 6% reflects the following consideration: the oxygen indicator is a round sheet whose shape and size are fixed (within a batch), and the main cause of size differences of the oxygen indicator in the obtained pictures is its distance from the camera. By shortening the shooting time window, the variation in the distance between the packaging bag and the camera at the moments the pictures are obtained is kept small, which essentially ensures that the oxygen indicator has roughly the same size in every picture; some difference always remains, however, so a tolerance interval must be set.

When a color block with the elliptical feature is found under the above conditions, at most 30% of its edge may be the virtual edge 202. When the real edge is less than 100%, the virtual straight line between the two edge pixels with the largest separation is taken as the ellipse major axis 201; this reduces the amount of computation and still finds the major axis essentially correctly (the deviation is within the acceptable range). A perpendicular to the major axis is then drawn at its midpoint, and the distance from the foot of this perpendicular to its intersection with the ellipse edge is taken as the semi-minor axis, from which the minor axis is found.
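A hedged numpy sketch of the geometry just described: the farthest pair of real edge pixels is taken as the major axis, the 6% tolerance is checked against the preset length, and the minor axis is estimated from the largest perpendicular offset of the edge points from the major axis (an approximation of the perpendicular-at-the-midpoint construction). This is an illustration under stated assumptions, not production code.

    import numpy as np

    def estimate_axes(edge_points):
        # edge_points: (N, 2) array of (x, y) coordinates of actually visible edge pixels
        pts = np.asarray(edge_points, dtype=float)
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        i, j = np.unravel_index(np.argmax(dists), dists.shape)
        major = dists[i, j]                          # farthest pixel pair approximates the major axis
        mid = (pts[i] + pts[j]) / 2.0
        axis_dir = (pts[j] - pts[i]) / major
        rel = pts - mid
        perp = rel - np.outer(rel @ axis_dir, axis_dir)
        minor = 2.0 * np.max(np.linalg.norm(perp, axis=1))   # largest perpendicular offset ~ semi-minor axis
        return major, minor

    def major_axis_ok(major, preset_major, tolerance=0.06):
        # the 6% tolerance on the major-axis length described above
        return abs(major - preset_major) / preset_major <= tolerance

    # quick check on a synthetic ellipse with semi-axes 50 and 30
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    print(estimate_axes(np.c_[50.0 * np.cos(t), 30.0 * np.sin(t)]))   # roughly (100.0, 60.0)
    print(major_axis_ok(100.0, 104.0))                                # True: within 6%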

The definition of a circle is included in the definition of an ellipse, and when the ellipse in the found picture is a circle, the arbitrary diameter of the circle is taken as the ellipse major axis 201, and the ellipse minor axis is equal to the major axis.

Example six:

of all the elliptical patches 200, the patch 200 having the largest area value of visible pixels (excluding white pixels) is found. The larger the area value is, the larger the calculation base number is when calculating the chroma average value in the later step, so that the obtained result is more accurate.

The method for finding the maximum area value of the color block 200 comprises the following steps:

1. If a plurality of color blocks with the elliptical feature are obtained by the method of example five, the following procedure is used.

First, binarize each color block 200 and obtain the closed edge of its black pixel region and the area value of the black pixel region. Then compare the area values of the black pixel regions and take at least one color block with the largest area value. This excludes color blocks that consist mostly of white pixels.

2. When only one color block with the elliptic characteristic is found by the method of the fifth example, the area value is directly determined to be the maximum; and the black pixel area closed edge of this color block is obtained.

After the color blocks with the largest black-pixel area are found, their ellipticity is compared; in general, a color block with smaller ellipticity has a larger area value, but the two criteria are not equivalent. The step above calculates the area of the black region, yet the picture with the larger black area is not necessarily the one with the smaller ellipticity. For example, if the edge of a found elliptical color block is partly missing, the black region of that color block is also partly missing; this can yield a small ellipticity while the black area is nevertheless small. It is therefore necessary, after finding the color blocks with the largest black-pixel area, to evaluate their ellipticity and select the picture of the color block with the smallest ellipticity.

3. And finding out a picture of a color block with small ellipticity. The method for finding out the color block with small ellipticity comprises the following steps:

Color blocks with small ellipticity are found by calculating the lengths of the ellipse major axis 201 and the minor axis and the ratio between the two; the result is identified as the picture of the color block with small ellipticity.

The calculation formula is as follows:

ovality = (major axis − minor axis) / major axis × 100%

Definition of ellipticity size: a large ovality means the difference between the major and minor axes is larger; a small ovality means the difference between the major and minor axes is smaller. For example, a circle is the ellipse with the smallest ovality.

Of course, when there is only one picture with the largest area value, the picture is determined as a picture with small color block ovality.
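A hedged OpenCV/numpy sketch of the selection logic of this example: binarize each candidate block, measure its black-pixel area, keep the block(s) with the largest area, and among those keep the one with the smallest ellipticity according to the formula above. The use of Otsu's threshold and the candidate data layout are assumptions made for the sketch.

    import cv2
    import numpy as np

    def black_area(block_bgr):
        gray = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2GRAY)
        # Otsu's threshold separates the darker indicator pixels (mapped to 0) from the lighter background
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return int(np.count_nonzero(binary == 0))

    def ellipticity(major, minor):
        # ellipticity as given by the formula above
        return (major - minor) / major * 100.0

    def select_block(candidates):
        # candidates: list of dicts with keys "image", "major", "minor" (assumed layout)
        areas = [black_area(c["image"]) for c in candidates]
        largest = [c for c, a in zip(candidates, areas) if a == max(areas)]
        return min(largest, key=lambda c: ellipticity(c["major"], c["minor"]))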

The method has important significance for identifying accurate chromaticity by finding a color block with smaller ellipticity:

A small color-block ellipticity means that the camera axis is closer to perpendicular to the surface of the color block, so the chromaticity of the obtained picture is closer to the color actually displayed by the color block. A large color-block ellipticity means a large angle between the camera axis and the color-block surface; the color of the shot is then distorted by lighting interference, the object usually looks darker, the saturation drops, and warm tones shift toward cold tones. The chromaticity identified from such a picture is biased, and for an oxygen indicator card whose chromaticity changes are subtle this would affect the identification result.

Example seven:

this example describes an accurate chroma operation from the found ellipses.

The method specifically comprises: taking, within the closed edge of the black pixel region, all pixels whose mirror pixel about the ellipse major axis 201 also exists (i.e. the number of symmetric pixels is not 0), and calculating the average chromaticity of these pixels.

The closed edge of the black pixel region is not necessarily the same as the ellipse edge. When less than 100% of the ellipse edge is real, part of the edge is a virtual edge 202 deduced by calculation rather than a real edge of the color block's ellipse. When part of the ellipse edge is missing, the ellipse itself is incomplete (the missing part is probably caused by strong reflection, i.e. highlights in the picture), so the color block 200 is necessarily in an incomplete state. If the chromaticity average were computed over the whole ellipse interior in this state, the missing portion 203, which is blank or white, would be included in the averaging and the result would likely be inaccurate. The missing portion 203 must therefore be clipped away, which is done along the closed edge of the black pixel region, i.e. keeping only the part whose chromaticity is reliable.

When performing the chromaticity operation, one could take the chromaticity of a single point of the picture as the chromaticity of the whole oxygen indicator, but that would be extremely inaccurate: the chromaticity of one pixel can hardly represent the chromaticity of the whole color block 200, so an average must be taken.

Referring to fig. 3, the pixels taken are all those whose symmetric counterpart about the ellipse major axis 201 exists (is not 0). After the missing portion 203 is removed, the pixels of the missing portion count as 0 when the major axis is the symmetry axis, and the region symmetric to the missing portion is the missing symmetric portion 204; to compute the average accurately, the missing symmetric portion 204 is removed at the same time. In other words, with the major axis as the symmetry axis, pixels whose mirror pixel is 0 are also excluded from the average calculation.

Averaging with the major axis as the symmetry axis, i.e. pairing pixels on the two sides of the major axis before averaging, effectively compensates for the distortion of pixels far from the major axis when the oxygen indicator card is bent (becomes a curved surface); that far region is called the second region 207, and the picture there is dark. Referring to fig. 2, repeated comparisons show that pixels near the major axis give chromaticity closest to the true value; this is called the middle region. The part closer to the camera sometimes has truer chromaticity than the major-axis region, but it is sometimes a highlight area; it is called the first region 205, and the chromatic fidelity of these areas is random rather than certain. Sometimes the image of the first region is missing, sometimes it is complete and has the best color, and sometimes the middle region is the highlighted one, and so on.

The computer can only compute the elliptical shape; it cannot judge which side is closer to the camera with truer chromaticity and which side is farther with grayer chromaticity. By cutting off the 0 pixels and the pixels point-symmetric to them, a far, dark chromaticity is averaged with its symmetric near, bright pixel, or a far bright pixel with a near dark one; in short, the chromaticity becomes more accurate, and the dark pixel values of one particular area cannot distort the average too much.
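A hedged numpy sketch of the averaging rule just described: only pixels inside the closed black-pixel region whose mirror image about the major axis also lies inside the region are kept, and their values are averaged. The way the region mask and the major axis are represented here is an assumption made for illustration.

    import numpy as np

    def symmetric_mean_color(image_rgb, region_mask, axis_point, axis_dir):
        # image_rgb: (H, W, 3) array; region_mask: (H, W) bool array of the closed
        # black-pixel region; axis_point / axis_dir: a point on the major axis and
        # its unit direction vector (assumed to be supplied by the earlier steps).
        h, w = region_mask.shape
        ys, xs = np.nonzero(region_mask)
        pts = np.stack([xs, ys], axis=1).astype(float)
        rel = pts - np.asarray(axis_point, dtype=float)
        axis_dir = np.asarray(axis_dir, dtype=float)
        perp = rel - np.outer(rel @ axis_dir, axis_dir)
        mirrored = pts - 2.0 * perp                       # reflect each pixel across the major axis
        mx = np.rint(mirrored[:, 0]).astype(int)
        my = np.rint(mirrored[:, 1]).astype(int)
        inside = (mx >= 0) & (mx < w) & (my >= 0) & (my < h)
        inside &= region_mask[my.clip(0, h - 1), mx.clip(0, w - 1)]
        kept = pts[inside].astype(int)
        # average R, G and B over the kept pixels only
        return image_rgb[kept[:, 1], kept[:, 0]].mean(axis=0)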

To extract chromaticity, each pixel is decomposed into its three components R, G and B. In common pixel processing, the R, G and B components can be separated with either the PIL library or the opencv library.

With the PIL library, the separation code may reference: from PIL import Image; img = Image.open("demo.jpg"); img_array = img.load(); the pixel value is then read as img_array[x, y].

With the opencv library (which returns the image as a numpy array), the code may reference: import cv2; img = cv2.imread("demo.jpg"); the pixel value is then read as img[y, x], indexed row-first and stored in B, G, R order.
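
For reference, the two fragments above can be written out as a self-contained snippet; the file name demo.jpg and the coordinates are placeholders, and the comments note the different indexing and channel order of the two libraries.

from PIL import Image
import cv2

x, y = 50, 40                                    # placeholder pixel coordinates

img_pil = Image.open("demo.jpg").convert("RGB")
r, g, b = img_pil.load()[x, y]                   # PIL is indexed [x, y], R, G, B order

img_cv = cv2.imread("demo.jpg")
b2, g2, r2 = img_cv[y, x]                        # opencv is indexed [row, col], B, G, R order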

The average value obtained in this manner is regarded as the chromaticity of the elliptical marker.

After the chromaticity of the oval marker is calculated, it is compared with the standard preset chromaticity to determine the state of the marker.

When the oval marker is an oxygen indicator, the standard preset chromaticities are as follows:

an oxygen concentration greater than or equal to 1% corresponds to blue (RGB chroma: R 120-135, G 140-160, B 225-245);

an oxygen concentration of approximately 0.5% corresponds to purple (RGB chroma: R 205-230, G 180-210, B 225-240);

an oxygen concentration less than or equal to 0.1% corresponds to red (RGB chroma: R 230-260, G 155-180, B 205-225).

By comparing the RGB chroma values, the oxygen indicator can be identified as blue, purple or red, and the oxygen concentration in the interlayer of the packaging bag can accordingly be determined to be at least 1%, about 0.5%, or at most 0.1%.
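
Purely as an illustration, the comparison against the preset ranges quoted above could look like the following sketch; the thresholds are copied verbatim from the text (the upper red bound of 260 already covers the 8-bit maximum of 255) and would need calibration in practice.

def classify_oxygen(r, g, b):
    # thresholds copied verbatim from the preset chromaticities above
    if 120 <= r <= 135 and 140 <= g <= 160 and 225 <= b <= 245:
        return "blue: oxygen concentration >= 1%"
    if 205 <= r <= 230 and 180 <= g <= 210 and 225 <= b <= 240:
        return "purple: oxygen concentration approximately 0.5%"
    if 230 <= r <= 260 and 155 <= g <= 180 and 205 <= b <= 225:
        return "red: oxygen concentration <= 0.1%"
    return "outside the preset ranges - flag for manual review"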

Example eight:

In the above examples three to seven, after the pictures are obtained, the computer may perform the following operations on them:

Firstly, an independent first label is printed on each acquired picture; the first label comprises at least the number of the packaging bag and the number of the picture. For example, one of the pictures is labeled n00015p03, defined as picture 03 of package No. 00015.

A second label corresponding to the first label is then printed on each color block with the elliptic feature. For example, if an elliptical color block is cut from picture n00015p03, it is printed with the second label tn00015p03; through the first and second labels, the color block can be matched to the picture it was taken from.
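
As a hypothetical illustration of this labeling convention (the format is inferred from the example n00015p03 / tn00015p03, not prescribed by the text):

def first_label(bag_no, pic_no):
    return "n%05dp%02d" % (bag_no, pic_no)   # picture label, e.g. n00015p03

def second_label(first):
    return "t" + first                        # color-block label, e.g. tn00015p03

def source_picture(second):
    return second[1:]                         # tn00015p03 -> n00015p03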

After the subsequent processing, when the average chroma value is finally calculated from color block tn00015p03, picture 03 of package No. 00015 (picture n00015p03) is taken as the reference picture displayed on the detection interface, and the remaining pictures of package No. 00015 may be deleted.

In this scheme the picture and the color block pattern are processed separately.

Secondly, only the package from which each picture was shot needs to be recorded. In the later operations the color block is not cropped out of the picture; even if it is cropped, this is done while the picture itself is being processed, so the picture and the color block are never separated, and every step can be regarded as processing the color block within the picture rather than as an independent processing of the color block. Therefore, once the chroma value is obtained, the picture is directly set as the reference picture displayed on the system interface, without having to be looked up from the color block number.

Example nine:

The picture display and operation provided by this example are realized by a display device and a command input device that are coupled to a processor.

The content displayed by the display device includes at least a detection interface j2, which may be a desktop window, a full-screen window or a reduced window opened within the operating system window j1.

The detection interface j2 is coupled to the processor so that operations can be performed in it, including displaying information and entering commands.

Two viewing modes are included in the detection interface: (i) quick view mode, (ii) select view mode.

Referring to fig. 4, the quick view mode displays one picture of each package as a thumbnail on the detection interface j2. The thumbnail is, for example, the picture whose average chroma value was calculated as described in example eight, or the picture corresponding to the color block whose average chroma value was calculated. As packages pass along the conveyor belt c5 one by one, the pictures of packages that have undergone the chromaticity operation are displayed in sequence in the rolling area j4 of the detection interface, for example scrolling from left to right with each thumbnail shown for a certain time; alternatively they are arranged in batches, six thumbnails at a time, laid out on the detection interface as a matrix. The obtained chroma value or result information is filled into the area j3 adjacent to each package's picture, for example above and below it; the marker chromaticity information remarked by the detection system can include whether oxygen is present, an estimate of the oxygen concentration, and so on. The picture and chrominance information occupy most of the detection interface, at least 60% of the display area, with only a small area left for the necessary operation buttons (such as close/settings/properties/version information).
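
One possible, purely illustrative way to hold the data behind the rolling area j4, assuming a batch of six thumbnails at a time; none of these names come from the patent.

from collections import deque
from dataclasses import dataclass

@dataclass
class ThumbnailEntry:
    package_no: str      # e.g. "00015"
    picture_label: str   # e.g. "n00015p03"
    chroma_rgb: tuple    # averaged (R, G, B) values
    result_text: str     # e.g. "oxygen concentration <= 0.1%"

class RollingArea:
    # keeps the most recent thumbnails for the rolling area j4, six at a time
    def __init__(self, batch_size=6):
        self.entries = deque(maxlen=batch_size)

    def push(self, entry):
        self.entries.append(entry)   # the oldest thumbnail scrolls out
        return list(self.entries)    # the current batch to lay out as a matrix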

Referring to fig. 5, the select view mode. One of the pictures shown in the quick view mode is statically displayed in a large picture viewing area j6 of the detection interface and occupies at least 70% of the display area, which amounts to viewing that picture as a large image. The remaining area of the detection interface holds an operable icon area and an associated information area j5 for the picture, including its chrominance information; the package number must also be embedded there. The select view mode is chosen when a particular packaging bag needs specific inspection, for example when the calculated oxygen indicator chromaticity of that bag is questioned and the picture is to be examined carefully by eye.

Operation call: selecting one of the pictures in the quick view mode switches the interface to the select view mode, and the picture displayed in the select view mode is the picture selected in the quick view mode.

In the select view mode, at least a picture saturation adjusting function and a picture brightness adjusting function can be called. The brightness and saturation of the picture, and hence of the marker in it, can thus be adjusted so that the operator can see the color of the oxygen indicator directly and compare it under different brightness and saturation settings, eliminating the color distortion introduced by the camera and the shooting environment.
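
A minimal sketch of the two adjusting functions, assuming PIL's ImageEnhance module is used (the text does not name a library):

from PIL import Image, ImageEnhance

def adjust_brightness(img, factor):
    # factor 1.0 keeps the picture unchanged; > 1.0 brightens, < 1.0 darkens
    return ImageEnhance.Brightness(img).enhance(factor)

def adjust_saturation(img, factor):
    # factor 1.0 keeps the picture unchanged; 0.0 gives the grayscale mode
    return ImageEnhance.Color(img).enhance(factor)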

First mode (when the command input device is a touch screen used for both display and command input):

Display and command input are performed with the touch screen, and a directional touch along the first direction d1 calls up one of the picture saturation adjusting function and the picture brightness adjusting function. The first direction d1 is a directional touch on the touch screen within the detection interface and can be set as a single gesture, for example a touch from the bottom of the detection interface toward the top. When the touch screen registers the change in input capacitance, the display brightness adjusting function is called up in the background and set to receive brightness input; while the touch continues, the brightness increases linearly. Taking the original brightness of the picture as the system brightness, a bottom-to-top gesture raises the picture brightness continuously up to the maximum of 100%. The first direction also includes the reciprocating direction along the same axis: in this example, since the first direction d1 includes a touch from the bottom of the detection interface to the top, a touch from the top to the bottom likewise calls up the background display brightness adjusting function and sets it to receive brightness input, with the brightness decreasing linearly while the touch continues; taking the original picture brightness as the system brightness, a top-to-bottom gesture lowers the picture brightness continuously down to 50% of the system brightness.

When the directional touch in the first direction calls up the brightness adjusting function, the second direction no longer maps to brightness and instead calls up the display saturation adjusting function.

The second direction is a directional touch on the touch screen and can be set as a single gesture on the detection interface, for example a touch from the left side of the display interface to the right. When the touch screen registers the change in input capacitance, the display saturation adjusting function is called up in the background and set to receive saturation input; while the touch continues, the saturation increases linearly. With the original saturation of the picture following the system, a left-to-right gesture raises the picture saturation continuously up to the maximum of 100%. The second direction likewise includes the reciprocating direction along the same axis: in this example, since the second direction includes a touch from the left side of the detection interface to the right, a touch from the right side to the left also calls up the background saturation adjusting function and sets it to receive saturation input, with the saturation decreasing linearly while the touch continues; with the original picture saturation following the system, a right-to-left gesture lowers the picture saturation continuously down to the grayscale mode (saturation 0).
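
As an illustration of the linear mappings described for the two directions (up toward 100% brightness, down toward 50% of the original brightness; right toward 100% saturation, left toward grayscale), under the assumption that brightness and saturation are expressed on a 0-1 scale; the function and parameter names are not from the patent.

def brightness_level(p0, dy_fraction):
    # p0: original ("system") brightness on a 0-1 scale
    # dy_fraction: signed vertical travel / picture height, upward = positive
    if dy_fraction >= 0:
        return p0 + (1.0 - p0) * min(dy_fraction, 1.0)    # rises toward 100%
    return p0 * (1.0 + 0.5 * max(dy_fraction, -1.0))      # falls toward 0.5 * p0

def saturation_level(s0, dx_fraction):
    # s0: original saturation on a 0-1 scale
    # dx_fraction: signed horizontal travel / picture width, rightward = positive
    if dx_fraction >= 0:
        return s0 + (1.0 - s0) * min(dx_fraction, 1.0)    # rises toward 100%
    return s0 * (1.0 + max(dx_fraction, -1.0))            # falls toward 0 (grayscale)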

Second mode (when the command input device is a mouse used for command input):

desktop/notebook computers with mouse input are more used in the production line, and touch screen input devices (tablet computers) are not basically used, so that the second mode is more convenient to use. The second approach is substantially similar to the first approach, except that: the first mode is a touch input on a touch screen of an operation gesture, and the second mode is a slide-through input of a mouse in a detection interface, wherein the slide-through is performed under the condition that any button of the mouse is not clicked or pressed. The brightness/saturation adjustment function can be called only when the mouse slides within the picture display area.

Similarly, mouse input also has a first direction and a second direction; for their definition and use, refer to the first mode (touch screen for display and input), which need not be repeated here.

In both the first and second modes, the first direction is not necessarily the vertical (bottom-to-top) one; it may instead be from left to right, and when the first direction is the left-to-right gesture, the second direction should be the top-to-bottom gesture.

Example ten:

Since a mouse slide and a touch gesture are the same kind of action, only touch gesture input is exemplified here, but it should be understood that what applies to the touch gesture input of this example applies equally to mouse slide input.

The first direction d1 is: the process in which the touch gesture s1 passes through successive pixel rows from one direction, or through successive pixel columns. In this example, the process of the touch gesture s1 passing through successive pixel rows from one direction is taken as the first direction. The display consists of a number of pixel rows x1 and pixel columns x2, with the pixel rows x1 arranged horizontally and distributed row by row. "Passing through the pixel rows in succession" means, for example, sliding through the 20th row of pixels from the bottom, then the 21st, the 22nd, the 23rd row and so on; this is regarded as satisfying the one-direction sliding condition, and the gesture is not required to stay within one or a limited number of pixel columns while crossing the rows. In other words, when sliding upward from the 20th row, a gesture that passes through row 20 in column 1, row 21 in column 2, row 22 in column 4, row 23 in column 5 and so on is still considered to have the first direction (in this case the gesture does not cross the pixel rows perpendicularly).

The second direction d2 is the process in which the touch gesture s1 passes through successive pixel columns from one direction (or, alternatively, through successive pixel rows). When the process of the touch gesture passing through the pixel rows from one direction is the first direction, the process of passing through the pixel columns from one direction is the second direction. The pixel columns of the display are arranged perpendicular to the pixel rows and distributed in the vertical direction. "Passing through the pixel columns in succession" means, for example, sliding from the 30th column of pixels on the left through the 31st, 32nd, 33rd column and so on toward the right; this is regarded as satisfying the one-direction sliding condition, and the gesture is not required to stay within one or a limited number of pixel rows while crossing the columns. In other words, when sliding to the right from the 30th column, a gesture that passes through column 30 in row 10, column 31 in row 11, column 32 in row 12, column 33 in row 13 and so on is still considered to have the second direction (in this case the gesture does not cross the pixel columns perpendicularly).

Therefore, when the touch gesture s1 moves from one direction and passes through successive pixel rows and successive pixel columns at the same time, it is judged to satisfy the first direction d1 and the second direction d2 simultaneously, and the picture saturation adjusting function and the picture brightness adjusting function are invoked together. Referring to fig. 6, for example, when the touch gesture s1 slides from the lower left corner to the upper right corner of the display screen, it passes through pixel rows and pixel columns at the same time (row 40 column 55, row 41 column 56, row 42 column 57, row 43 column 58, row 44 column 59 …), and in this case the picture saturation adjusting function and the brightness adjusting function are invoked simultaneously.
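
A minimal sketch of the direction test described in this example: a gesture has the first direction when its samples cross successive pixel rows steadily in one sense, the second direction when they cross successive pixel columns steadily, and both for a diagonal swipe such as the fig. 6 example; the helper names are illustrative only.

def _monotone(seq):
    # steadily non-decreasing or non-increasing, with some net movement
    inc = all(b >= a for a, b in zip(seq, seq[1:]))
    dec = all(b <= a for a, b in zip(seq, seq[1:]))
    return (inc or dec) and seq[0] != seq[-1]

def has_first_direction(points):
    # points: touch samples in time order as (row, column) pixel coordinates
    return _monotone([row for row, _ in points])

def has_second_direction(points):
    return _monotone([col for _, col in points])

# the diagonal swipe of fig. 6: both directions hold, so both functions are invoked
samples = [(40, 55), (41, 56), (42, 57), (43, 58), (44, 59)]
assert has_first_direction(samples) and has_second_direction(samples)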

In addition, the first and second directions in this example and in example nine must be performed within the detection interface, mainly within the picture display area; at the very least, the display area of the operation buttons (close/settings/properties/version information and the like) and the display area of the interface frame are excluded. When a touch gesture or mouse slide has only part of its track inside the detection interface (the rest lying in the system interface), only the part of the track inside the detection interface is evaluated for the first and second directions.

The above examples one to ten are not isolated examples; new examples may be formed from their possible combinations, provided the new examples do not depart from the core idea of the present invention. Moreover, if a particular combination of examples conflicts with or contradicts the inventive concept of this patent, the examples should not simply be combined; such a combination should either be avoided or be adjusted after combination to eliminate the conflict.

The method is implemented in code as a dedicated computer program installed on a computer and run there. During operation, the gesture actions (the first-direction gesture, the second-direction gesture, the mouse slide in the first direction and the mouse slide in the second direction) may form an instruction or instruction set recognizable by the computer system.

Additionally, a processor of the present invention may execute one or more sets of instructions that cause the method of the above examples to be performed. "Instruction set" and "instructions" here refer to instructions that, when executed by the processor, cause it to perform one or more operations of the detection interface. The processor may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. It may reside in a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a server, a network router, switch or bridge, or any other processor-enabled computing device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device.

Thus, the processor in this example is installed in a computer that comprises the processor and a main memory.

The processor is a microprocessor, central processing unit or the like. More specifically, it may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processing device implementing another instruction set, or a processor implementing a combination of instruction sets.

It should be noted that the present examples are not the only possible implementation of the invention, but rather one or more of the many ways in which the invention may be implemented.

Other solutions obtained without departing from the core idea of the invention fall within the scope of protection of the invention.
