Image processing method and device, electronic equipment and storage medium

Document No.: 1832030  Publication date: 2021-11-12  Views: 5  Language: Chinese

Reading note: this technique, "Image processing method and device, electronic equipment and storage medium" (一种图像处理方法、装置、电子设备及存储介质), was created by Li Tong and Ma Zhichao on 2021-08-04. Abstract: The invention provides an image processing method, an image processing device, an electronic device and a storage medium. The method comprises: obtaining an image to be recognized that includes a target VIN pattern of a vehicle, the target VIN pattern including a plurality of text patterns; determining at least one location area in the image to be recognized based on an integrated location detection model; screening, among the at least one location area, the location area in which the included text patterns have an aspect ratio meeting a preset VIN aspect ratio requirement and a font meeting a preset VIN font requirement; recognizing, based on an integrated text recognition model, the text characters respectively corresponding to the plurality of continuous text patterns in the screened location area; and determining, among the recognized text characters and based on a preset VIN character configuration requirement, the text characters respectively corresponding to the text patterns in the target VIN pattern. With this method and device, no network is required, network resources are saved, and the accuracy of the finally determined text characters corresponding to the text patterns in the target VIN pattern can be improved.

1. An image processing method applied to an electronic device, the method comprising:

acquiring an image to be recognized, wherein the image to be recognized at least comprises a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns;

determining at least one position area in the image to be recognized based on a position detection model integrated in the electronic device, wherein each position area comprises a plurality of continuous text patterns; the plurality of text patterns in one of the at least one position area comprise the respective text patterns in the target VIN pattern;

screening, from the at least one position area, the position area in which the aspect ratio of the included text patterns meets a preset VIN aspect ratio requirement and the font of the included text patterns meets a preset VIN font requirement; the plurality of text patterns in the screened position area comprise each text pattern in the target VIN pattern;

identifying text characters respectively corresponding to the plurality of continuous text patterns in the screened position area based on a text recognition model integrated in the electronic device;

and determining, among the recognized text characters, text characters respectively corresponding to each text pattern in the target VIN pattern based on a preset VIN character configuration requirement.

2. The method of claim 1, wherein the text recognition model is trained by:

obtaining a plurality of sample data sets, wherein each sample data set comprises a sample image, the sample image at least comprises a sample VIN pattern of a vehicle, the sample VIN pattern comprises a plurality of sample text patterns, and the sample data sets further comprise: marking text characters corresponding to the sample text patterns respectively;

and training the model by using a plurality of sample data sets until parameters in the model are converged, thereby obtaining the text recognition model.

3. The method of claim 2, wherein said obtaining a plurality of sample data sets comprises:

acquiring a first image, wherein the first image comprises a first VIN pattern of a vehicle and the first VIN pattern comprises a plurality of first text patterns; and obtaining a labeling area of the first VIN pattern in the first image and labeling text characters respectively corresponding to the plurality of first text patterns;

identifying a plurality of first text patterns in the first VIN pattern in the labeling area of the first image according to labeling text characters respectively corresponding to the plurality of first text patterns;

generating a second VIN pattern different from the first VIN pattern according to the plurality of first text patterns;

generating a second image according to the second VIN pattern and a preset background image;

generating a sample data set according to the first image and the labeling text characters respectively corresponding to the plurality of first text patterns; acquiring text characters respectively corresponding to a plurality of second text patterns in the second VIN pattern; and generating a sample data set according to the second image and the text characters respectively corresponding to the plurality of second text patterns.

4. The method of claim 3, wherein generating a second image according to the second VIN pattern and a predetermined background image comprises:

generating an intermediate image according to the second VIN pattern and a preset background image;

and adding noise data in the intermediate image according to a random noise generation algorithm to obtain the second image.

5. The method of claim 3, wherein generating a second image according to the second VIN pattern and a predetermined background image comprises:

generating an intermediate image according to the second VIN pattern and a preset background image;

blurring the intermediate image to obtain a blurred image;

and acquiring the second image according to the blurred image.

6. An image processing apparatus applied to an electronic device, the apparatus comprising:

a first obtaining module, configured to obtain an image to be recognized, wherein the image to be recognized at least comprises a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns;

a first determination module, configured to determine at least one location area in the image to be recognized based on a location detection model that is integrated in the electronic device, where each location area includes a plurality of continuous text patterns; the plurality of text patterns in one of the at least one location area comprise respective ones of the target VIN patterns;

a screening module, configured to screen, from the at least one position area, the position area in which the aspect ratio of the included text patterns meets the preset VIN aspect ratio requirement and the font of the included text patterns meets the preset VIN font requirement; the plurality of text patterns in the screened position area comprise each text pattern in the target VIN pattern;

a recognition module, configured to recognize text characters respectively corresponding to a plurality of continuous text patterns in the screened position area based on a text recognition model integrated in the electronic device;

and the second determining module is used for determining text characters corresponding to each text pattern in the target VIN patterns in the recognized text characters based on preset VIN character configuration requirements.

7. The apparatus of claim 6, further comprising:

a second obtaining module, configured to obtain multiple sample data sets, where each sample data set includes a sample image, the sample image at least includes a sample VIN pattern of a vehicle, and the sample VIN pattern includes multiple sample text patterns, and the sample data set further includes: marking text characters corresponding to the sample text patterns respectively;

and the training module is used for training the model by using a plurality of sample data sets until parameters in the model are converged, so that the text recognition model is obtained.

8. The apparatus of claim 7, wherein the second obtaining module comprises:

a first acquisition unit, configured to acquire a first image, wherein the first image comprises a first VIN pattern of a vehicle and the first VIN pattern comprises a plurality of first text patterns, and to obtain a labeling area of the first VIN pattern in the first image and labeling text characters respectively corresponding to the plurality of first text patterns;

the identification unit is used for identifying a plurality of first text patterns in the first VIN pattern in the labeling area of the first image according to labeling text characters respectively corresponding to the plurality of first text patterns;

a first generating unit, configured to generate a second VIN pattern different from the first VIN pattern according to the plurality of first text patterns;

the second generating unit is used for generating a second image according to the second VIN pattern and a preset background image;

a third generating unit, configured to generate a sample data set according to the first image and the labeled text characters corresponding to the plurality of first text patterns, respectively;

a second obtaining unit, configured to obtain text characters corresponding to a plurality of second text patterns in the second VIN pattern;

and the fourth generating unit is used for generating a sample data set according to the second image and the text characters respectively corresponding to the plurality of second text patterns.

9. The apparatus of claim 8, wherein the second generating unit comprises:

the first generating subunit is used for generating an intermediate image according to the second VIN pattern and a preset background image;

and the adding subunit is used for adding noise data in the intermediate image according to a random noise generation algorithm to obtain the second image.

10. The apparatus of claim 8, wherein the second generating unit comprises:

the second generating subunit is used for generating an intermediate image according to the second VIN pattern and a preset background image;

a blurring subunit, configured to perform blurring processing on the intermediate image to obtain a blurred image;

and the obtaining subunit is used for obtaining the second image according to the blurred image.

11. An electronic device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.

12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.

Technical Field

The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.

Background

A VIN (Vehicle Identification Number) is a unique seventeen-character code of letters and digits used to identify a vehicle. The VIN may be fixed below the front windshield of an automobile and may also appear on the vehicle's driving license, and the VIN often needs to be recognized in vehicle identification scenarios.
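For illustration only, the basic seventeen-character composition of a VIN can be checked with a simple pattern. Under ISO 3779, VINs use capital letters and digits but exclude I, O and Q; this sketch checks only that character-level rule and is not the "preset VIN character configuration requirement" defined later in this document:

```python
import re

# Per ISO 3779, a VIN is 17 characters drawn from digits and capital
# letters, excluding I, O and Q (to avoid confusion with 1 and 0).
VIN_PATTERN = re.compile(r"^[A-HJ-NPR-Z0-9]{17}$")

def looks_like_vin(text: str) -> bool:
    """Return True if `text` satisfies the basic VIN character rules."""
    return bool(VIN_PATTERN.match(text))
```

A full validity check would also verify the check digit in position 9, which is out of scope for this sketch.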

At present, a camera can be used to capture an image including the VIN on the lower side of the front windshield of a vehicle, and the captured image is uploaded to a cloud service, where a general-purpose OCR (Optical Character Recognition) technology available on the market recognizes the VIN in the VIN label in the image; the recognized VIN of the vehicle is then returned to the device.

However, the inventors have found that the above approach depends heavily on the network: without a network connection, the VIN of the vehicle cannot be obtained.

Secondly, transmitting the image to the cloud and receiving the VIN of the vehicle returned by the cloud consume more network resources.

In addition, the accuracy of recognizing the VIN in the VIN label in the image is low.

Disclosure of Invention

The application discloses an image processing method, an image processing device, electronic equipment and a storage medium.

In a first aspect, the present application shows an image processing method applied to an electronic device, the method including:

acquiring an image to be recognized, wherein the image to be recognized at least comprises a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns;

determining at least one position area in the image to be recognized based on a position detection model integrated in the electronic device, wherein each position area comprises a plurality of continuous text patterns; the plurality of text patterns in one of the at least one position area comprise the respective text patterns in the target VIN pattern;

screening, from the at least one position area, the position area in which the aspect ratio of the included text patterns meets a preset VIN aspect ratio requirement and the font of the included text patterns meets a preset VIN font requirement; the plurality of text patterns in the screened position area comprise each text pattern in the target VIN pattern;

identifying text characters respectively corresponding to the plurality of continuous text patterns in the screened position area based on a text recognition model integrated in the electronic device;

and determining, among the recognized text characters, text characters respectively corresponding to each text pattern in the target VIN pattern based on a preset VIN character configuration requirement.

In an alternative implementation, the training of the text recognition model includes:

obtaining a plurality of sample data sets, wherein each sample data set comprises a sample image, the sample image at least comprises a sample VIN pattern of a vehicle, the sample VIN pattern comprises a plurality of sample text patterns, and the sample data sets further comprise: marking text characters corresponding to the sample text patterns respectively;

and training the model by using a plurality of sample data sets until parameters in the model are converged, thereby obtaining the text recognition model.

In an optional implementation manner, the obtaining a plurality of sample data sets includes:

acquiring a first image, wherein the first image comprises a first VIN pattern of a vehicle and the first VIN pattern comprises a plurality of first text patterns; and obtaining a labeling area of the first VIN pattern in the first image and labeling text characters respectively corresponding to the plurality of first text patterns;

identifying a plurality of first text patterns in the first VIN pattern in the labeling area of the first image according to labeling text characters respectively corresponding to the plurality of first text patterns;

generating a second VIN pattern different from the first VIN pattern according to the plurality of first text patterns;

generating a second image according to the second VIN pattern and a preset background image;

generating a sample data set according to the first image and the labeling text characters respectively corresponding to the plurality of first text patterns; acquiring text characters respectively corresponding to a plurality of second text patterns in the second VIN pattern; and generating a sample data set according to the second image and the text characters respectively corresponding to the plurality of second text patterns.

In an optional implementation manner, the generating a second image according to the second VIN pattern and a preset background image includes:

generating an intermediate image according to the second VIN pattern and a preset background image;

and adding noise data in the intermediate image according to a random noise generation algorithm to obtain the second image.
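The noise-adding step above can be sketched as follows. Zero-mean Gaussian noise is used here as one possible "random noise generation algorithm"; the document does not fix a particular algorithm, and the default sigma is an illustrative placeholder:

```python
import numpy as np

def add_random_noise(image, sigma=10.0, seed=None):
    """Add zero-mean Gaussian noise to an 8-bit image and clip the
    result back to the valid [0, 255] range."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```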

In an optional implementation manner, the generating a second image according to the second VIN pattern and a preset background image includes:

generating an intermediate image according to the second VIN pattern and a preset background image;

blurring the intermediate image to obtain a blurred image;

and acquiring the second image according to the blurred image.
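The blurring step can be sketched with a simple k-by-k mean (box) filter over a grayscale image, as below; the document does not specify which blurring operation is used, and a Gaussian blur would serve equally well:

```python
import numpy as np

def box_blur(image, k=3):
    """Blur a 2-D grayscale image with a k-by-k mean filter,
    replicating edge pixels at the borders."""
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Sum each of the k*k shifted copies of the image, then average.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / (k * k)).astype(np.uint8)
```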

In a second aspect, the present application shows an image processing apparatus applied to an electronic device, the apparatus comprising:

a first obtaining module, configured to obtain an image to be recognized, wherein the image to be recognized at least comprises a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns;

a first determination module, configured to determine at least one location area in the image to be recognized based on a location detection model that is integrated in the electronic device, where each location area includes a plurality of continuous text patterns; the plurality of text patterns in one of the at least one location area comprise respective ones of the target VIN patterns;

a screening module, configured to screen, from the at least one position area, the position area in which the aspect ratio of the included text patterns meets the preset VIN aspect ratio requirement and the font of the included text patterns meets the preset VIN font requirement; the plurality of text patterns in the screened position area comprise each text pattern in the target VIN pattern;

a recognition module, configured to recognize text characters respectively corresponding to a plurality of continuous text patterns in the screened position area based on a text recognition model integrated in the electronic device;

and the second determining module is used for determining text characters corresponding to each text pattern in the target VIN patterns in the recognized text characters based on preset VIN character configuration requirements.

In an optional implementation, the apparatus further comprises:

a second obtaining module, configured to obtain multiple sample data sets, where each sample data set includes a sample image, the sample image at least includes a sample VIN pattern of a vehicle, and the sample VIN pattern includes multiple sample text patterns, and the sample data set further includes: marking text characters corresponding to the sample text patterns respectively;

and the training module is used for training the model by using a plurality of sample data sets until parameters in the model are converged, so that the text recognition model is obtained.

In an optional implementation manner, the second obtaining module includes:

a first acquisition unit, configured to acquire a first image, wherein the first image comprises a first VIN pattern of a vehicle and the first VIN pattern comprises a plurality of first text patterns, and to obtain a labeling area of the first VIN pattern in the first image and labeling text characters respectively corresponding to the plurality of first text patterns;

the identification unit is used for identifying a plurality of first text patterns in the first VIN pattern in the labeling area of the first image according to labeling text characters respectively corresponding to the plurality of first text patterns;

a first generating unit, configured to generate a second VIN pattern different from the first VIN pattern according to the plurality of first text patterns;

the second generating unit is used for generating a second image according to the second VIN pattern and a preset background image;

a third generating unit, configured to generate a sample data set according to the first image and the labeled text characters corresponding to the plurality of first text patterns, respectively;

a second obtaining unit, configured to obtain text characters corresponding to a plurality of second text patterns in the second VIN pattern;

and the fourth generating unit is used for generating a sample data set according to the second image and the text characters respectively corresponding to the plurality of second text patterns.

In an optional implementation manner, the second generating unit includes:

the first generating subunit is used for generating an intermediate image according to the second VIN pattern and a preset background image;

and the adding subunit is used for adding noise data in the intermediate image according to a random noise generation algorithm to obtain the second image.

In an optional implementation manner, the second generating unit includes:

the second generating subunit is used for generating an intermediate image according to the second VIN pattern and a preset background image;

a blurring subunit, configured to perform blurring processing on the intermediate image to obtain a blurred image;

and the obtaining subunit is used for obtaining the second image according to the blurred image.

In a third aspect, the present application shows an electronic device comprising:

a processor;

a memory for storing processor-executable instructions;

wherein the processor is configured to perform the image processing method according to the first aspect.

In a fourth aspect, the present application shows a non-transitory computer readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.

In a fifth aspect, the present application shows a computer program product, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.

The technical scheme provided by the application can comprise the following beneficial effects:

in the method, an image to be recognized is obtained, the image to be recognized at least comprises a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns. At least one location area is determined in the image to be recognized on the basis of a location detection model which has been integrated in the electronic device. And in at least one position area, screening the position area in which the aspect ratio of the included text pattern meets the preset VIN aspect ratio requirement and the font of the included text pattern meets the preset VIN font requirement. And identifying text characters respectively corresponding to a plurality of continuous text patterns in the screened position area based on a text identification model integrated in the electronic equipment. And determining text characters respectively corresponding to each text pattern in the target VIN pattern in the recognized text characters based on the preset VIN character configuration requirement.

In the application, the position detection model is integrated in the electronic device in advance, so that at least one position area can be determined in the image to be recognized directly based on the position detection model integrated in the electronic device, other devices are not needed, data interaction with other devices is not needed, and the purposes of not depending on a network and saving network resources in a scene of determining at least one position area in the image to be recognized are achieved.

And based on the preset VIN aspect ratio requirement and the preset VIN font requirement, the position area where the text pattern in the non-VIN pattern is located can be filtered in at least one position area, the range of text character recognition based on the text recognition model can be narrowed, and the accuracy of the text character corresponding to each text pattern in the finally determined target VIN pattern can be indirectly improved.

Secondly, because the text recognition model is integrated in the electronic device in advance, text characters corresponding to the continuous text patterns in the screened position area can be recognized directly based on the text recognition model integrated in the electronic device, without the help of other devices and without data interaction with other devices, thereby achieving the purposes of not depending on a network and saving network resources in the scenario of recognizing the text characters corresponding to the continuous text patterns in the screened position area.

In addition, based on the preset VIN character configuration requirement, text characters corresponding to text patterns in non-VIN patterns can be filtered out from the recognized text characters, so that the accuracy of the text characters corresponding to each text pattern in the finally determined target VIN patterns can be improved.

Drawings

Fig. 1 is a flowchart of the steps of an image processing method of the present application.

FIG. 2 is a flow chart of steps of an image processing method of the present application.

FIG. 3 is a flow chart of steps of an image processing method of the present application.

Fig. 4 is a block diagram of an image processing apparatus according to the present application.

FIG. 5 is a block diagram of an electronic device of the present application.

FIG. 6 is a block diagram of an electronic device of the present application.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Referring to Fig. 1, which shows a flowchart of the steps of an image processing method according to the present application, the method is applied to an electronic device and may specifically include the following steps:

in step S101, an image to be recognized is obtained, where the image to be recognized at least includes a target VIN pattern of a vehicle, and the target VIN pattern includes a plurality of text patterns.

In the application, the image to be recognized may be obtained by shooting by the electronic device, or may be obtained by shooting by other devices and transmitted to the electronic device.

One of the objectives of the present application is to recognize the text characters corresponding to the respective text patterns in a vehicle's VIN pattern, which may be located in at least the following places: the driving license of the vehicle, the underside of the front windshield of the vehicle, the engine surface of the vehicle, and the like.

Thus, the image to be recognized may be a captured image including the driving license of the vehicle, an image including the underside of the front windshield of the vehicle, an image including the engine surface of the vehicle, or the like.

The captured image to be recognized contains the target VIN pattern of the vehicle, and the purpose of the present application is to recognize, from the image to be recognized, the text characters respectively corresponding to the plurality of text patterns in the target VIN pattern; for details, refer to the flow from step S102 to step S105, which is not elaborated here.

After the image to be recognized is obtained, it may be preprocessed. For example, the width and height of the image may be adjusted to preset values (for example, the width and height required by the subsequent position detection model and text recognition model); the color channel order of the image may be adjusted to a preset order (for example, the order required by the subsequent models, such as converting BGR data of the image into RGB data); and the color value data of the image may be converted from a bitmap into a tensor (as required by the subsequent position detection model and text recognition model). Step S102 and the subsequent steps are then performed on the preprocessed image to be recognized.
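The preprocessing described above might look as follows, assuming an H x W x 3 uint8 input in BGR channel order; the target sizes, channel order and CHW tensor layout are illustrative assumptions, since the exact requirements of the position detection model and text recognition model are not given here:

```python
import numpy as np

def preprocess(image, target_h=32, target_w=320):
    """Resize, reorder channels and convert a bitmap to a float tensor."""
    # 1. Resize to the preset width/height (nearest-neighbour sampling;
    #    a real pipeline would typically use bilinear interpolation).
    h, w = image.shape[:2]
    ys = np.arange(target_h) * h // target_h
    xs = np.arange(target_w) * w // target_w
    resized = image[ys][:, xs]
    # 2. Reorder colour channels BGR -> RGB.
    rgb = resized[:, :, ::-1]
    # 3. Convert the bitmap to a CHW float tensor scaled to [0, 1].
    return rgb.astype(np.float32).transpose(2, 0, 1) / 255.0
```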

In step S102, at least one location area is determined in the image to be recognized, each location area including a plurality of text patterns in succession, based on a location detection model that has been integrated in the electronic device.

The plurality of text patterns in one of the at least one location area include the respective text patterns in the target VIN pattern.

The position detection model may be trained based on the CRAFT (Character Region Awareness for Text detection) model.

In this application, the electronic device may include a front-end device or a back-end device, and the like.

The front-end device may include a device that can be directly controlled by a wide range of users, for example, a mobile phone, a tablet computer, a notebook computer, or a desktop computer.

The backend equipment may include a backend server or the like hosted by a service provider.

The electronic device may train the location detection model in the electronic device in advance and integrate the location detection model in the electronic device.

Alternatively, the electronic device may download the location detection model from another device in advance, and integrate the location detection model in the electronic device. After the position detection model is updated by other devices, the electronic device may also download a new position detection model from the other devices, and replace the position detection model that has been integrated in the electronic device with the new position detection model, so as to update the position detection model.

Thus, when at least one location area needs to be determined in the image to be recognized, it can be determined directly based on the location detection model integrated in the electronic device. The electronic device therefore does not need to download the location detection model from other devices in real time, does not need to upload the image to be recognized to other devices so that those devices determine and return the location areas, and does not need to receive location areas determined and returned by other devices.

Therefore, the purposes of not depending on a network and saving network resources in a scene of determining at least one position area in the image to be recognized can be achieved.

Specifically, the electronic device may input the image to be recognized into the position detection model, so that the position detection model detects, in the image to be recognized, position areas each including a plurality of continuous text patterns and outputs the coordinate information of the respective position areas; the electronic device may then obtain the coordinate information of the respective position areas output by the position detection model.

In one embodiment, for any one location area, the location area may be rectangular, and the coordinate information of the location area may include: the coordinates of one vertex of the rectangle in the image to be recognized, the width of the rectangle, the height of the rectangle, and the like.
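Such coordinate information could be represented by a small record like the following (a hypothetical sketch; the field names are illustrative, not taken from the document):

```python
from dataclasses import dataclass

@dataclass
class LocationArea:
    """One rectangular position area: one vertex plus width and height."""
    x: int  # x-coordinate of the top-left vertex in the image
    y: int  # y-coordinate of the top-left vertex in the image
    w: int  # width of the rectangle in pixels
    h: int  # height of the rectangle in pixels

    @property
    def aspect_ratio(self) -> float:
        return self.w / self.h
```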

Wherein the location detection model integrated in the electronic device may be a vectorized lite-format location detection model or the like.

In step S103, among the at least one location area, location areas are screened in which the aspect ratio of each included text pattern meets a preset VIN aspect-ratio requirement and the font of each included text pattern meets a preset VIN font requirement.

The plurality of text patterns in one of the screened location areas include each text pattern in the target VIN pattern.

In this application, official bodies specify a uniform style for the text patterns in a VIN pattern.

For example, the aspect ratio of each text pattern in a VIN pattern needs to meet a specific requirement, that is, a preset VIN aspect-ratio requirement. The preset VIN aspect-ratio requirement may be an aspect-ratio interval, and the aspect ratio of each text pattern in the VIN pattern needs to fall within that interval.

That is, if the aspect ratio of a text pattern is not within the interval, the text pattern is often not a VIN text pattern; if it is within the interval, the text pattern may be a VIN text pattern.

For another example, the font of each text pattern in a VIN pattern needs to meet a specific requirement, that is, a preset VIN font requirement. The preset VIN font requirement may be a set of at least one specific font, and the font of each text pattern in the VIN pattern needs to belong to that set.

That is, if the font of a text pattern does not belong to the set of specific fonts, the text pattern is often not a VIN text pattern; if it does, the text pattern may be a VIN text pattern.

In one possible case, only one of the location areas obtained in step S102 contains the VIN pattern, and the other areas do not.

Therefore, by means of the preset VIN aspect-ratio requirement and the preset VIN font requirement, location areas without the VIN pattern can be removed from the at least one location area.

When only one location area remains, it is often the location area containing the VIN pattern.

When at least two location areas remain, one of them is the location area containing the VIN pattern, and each of them may possibly be that area.

Specifically, in at least one location area, a location area may be filtered in which the aspect ratio of the included text pattern meets the preset VIN aspect ratio requirement and the font of the included text pattern meets the preset VIN font requirement.

For example, for any one location area, the location of each text pattern identified in that location area may be determined.

Then, for any text pattern, a circumscribed rectangle of the text pattern can be generated according to the position of the text pattern (any existing method for generating a circumscribed rectangle may be used; the specific method is not limited here). The width and the height of the circumscribed rectangle are then obtained, and the ratio of the width to the height is calculated to obtain the aspect ratio of the text pattern. The same is done for each other text pattern.

If the aspect ratio of at least one text pattern in the location area does not meet the preset VIN aspect-ratio requirement, the location area is not screened. If the aspect ratio of every text pattern in the location area meets the preset VIN aspect-ratio requirement, the font of each text pattern in the location area can be identified, and it is then detected whether the font of each text pattern meets the preset VIN font requirement.

If the font of at least one text pattern in the location area does not meet the preset VIN font requirement, the location area is not screened; if the fonts of all the text patterns in the location area meet the preset VIN font requirement, the location area can be screened.

The same is true for each of the other location areas.
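The screening logic of this step can be sketched as follows. The aspect-ratio interval, the allowed font set, and the per-character font labels are hypothetical placeholders; in practice they would come from the preset requirements and a font classifier:

```python
# Hypothetical aspect-ratio interval and allowed font set, for illustration only.
VIN_ASPECT_RANGE = (0.3, 0.8)   # width / height of one character's bounding box
VIN_FONTS = {"OCR-B"}           # assumed allowed font(s)

def aspect_ratio(box):
    """box = (x, y, width, height) of a text pattern's circumscribed rectangle."""
    _, _, w, h = box
    return w / h

def keep_area(char_boxes, char_fonts):
    """Keep a location area only if every text pattern passes both checks."""
    lo, hi = VIN_ASPECT_RANGE
    if any(not (lo <= aspect_ratio(b) <= hi) for b in char_boxes):
        return False                        # at least one aspect ratio out of range
    return all(f in VIN_FONTS for f in char_fonts)

# One area whose 17 characters all pass, and one with a too-wide character.
good = [(0, 0, 12, 24)] * 17
bad = good[:16] + [(0, 0, 30, 24)]
fonts = ["OCR-B"] * 17
print(keep_area(good, fonts), keep_area(bad, fonts))  # → True False
```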

In step S104, text characters corresponding to each of the plurality of continuous text patterns in the screened-out position region are identified based on the text recognition model that has been integrated in the electronic device.

The electronic device may train the text recognition model in the electronic device in advance and integrate the text recognition model in the electronic device.

Alternatively, the electronic device may download the text recognition model from another device in advance and integrate it in the electronic device. After the other device updates the text recognition model, the electronic device may also download the new model from it and replace the integrated model with the new one, so as to keep the text recognition model up to date.

In this way, when the text characters corresponding to the plurality of continuous text patterns in the screened location area need to be identified, they can be identified directly based on the text recognition model integrated in the electronic device. The electronic device therefore neither needs to download the text recognition model from other devices in real time, nor needs to upload the partial image of the screened location area to other devices so that they identify the text characters and return the result.

In this way, in the scene of identifying the text characters corresponding to the continuous text patterns in the screened location area, the method does not depend on a network and saves network resources.

Specifically, the electronic device may input the image to be recognized and the coordinate information of the screened location area into the text recognition model, so that the model identifies, according to the coordinate information, the text characters corresponding to each of the plurality of continuous text patterns in the screened location area and outputs them. The electronic device may then acquire the text characters output by the model.
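As a minimal sketch of how the coordinate information restricts recognition to the screened region, a crop over an image stored as rows of pixel values (a stand-in for the real tensor handed to the model) looks like:

```python
def crop(image, x, y, width, height):
    """Crop the rectangular region at (x, y) from an image stored as rows of pixels."""
    return [row[x:x + width] for row in image[y:y + height]]

# 6x4 toy "image"; crop the 3x2 region whose top-left corner is at (x=1, y=1).
img = [[r * 10 + c for c in range(6)] for r in range(4)]
print(crop(img, x=1, y=1, width=3, height=2))  # → [[11, 12, 13], [21, 22, 23]]
```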

The manner of training the text recognition model may specifically refer to the embodiment shown in fig. 2, and is not described in detail here.

Wherein the text recognition model integrated in the electronic device may be a vectorized lite-formatted text recognition model or the like.

In step S105, based on the preset VIN character configuration requirement, text characters corresponding to each text pattern in the target VIN patterns are determined in the recognized text characters.

In general, the VIN of a vehicle includes 17 characters, where the characters include numbers and English letters. The 3rd character of the VIN may only be a number, and the 15th to 17th characters may only be numbers; the other positions may be numbers or letters. Different positions represent different meanings, and some special positions may only take specific letters or numbers.

Therefore, according to the above, the preset VIN character configuration requirement may be set in advance.

In this way, in this step, text characters corresponding to each text pattern in the target VIN patterns may be determined in the recognized text characters based on at least the preset VIN character configuration requirement.

For example, in one example, for any one of the screened location areas, it may be determined whether the number of text characters in the location area is 17.

In a case where the number of text characters in the position area is not 17, it is determined that the text characters in the position area are not text characters corresponding to respective text patterns in the target VIN pattern.

In the case where the number of text characters in the position area is 17, it is determined whether the characters at positions 3, 15, 16, and 17 among the text characters of the position area are all numbers.

If at least one of the characters at positions 3, 15, 16, and 17 is not a number, it is determined that the text characters in the position area are not the text characters corresponding to the text patterns in the target VIN pattern.

If the characters at positions 3, 15, 16, and 17 are all numbers, it is determined whether the characters at the specific positions among the text characters of the position area are the expected specific letters or numbers.

If a character at a specific position is not the expected specific letter or number, it is determined that the text characters in the position area are not the text characters corresponding to the text patterns in the target VIN pattern.

If the characters at the specific positions are all the expected specific letters or numbers, it is determined that the text characters in the position area are the text characters corresponding to the respective text patterns in the target VIN pattern.

The above operations may be performed on each of the other screened location areas, in parallel or sequentially, until the text characters corresponding to each text pattern in the target VIN pattern are determined, after which the process may end.
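The above check sequence can be sketched as one validation function. Only the rules stated here are encoded (17 characters; positions 3, 15, 16, and 17 numeric); the allowed set for the "specific position" is a hypothetical placeholder, and the test strings are made up:

```python
DIGIT_POSITIONS = (3, 15, 16, 17)            # 1-based positions that must be digits
SPECIAL_POSITIONS = {1: set("L123456789")}   # hypothetical: allowed chars per position

def matches_vin_config(chars):
    """Return True only if `chars` satisfies every configured VIN rule."""
    if len(chars) != 17:
        return False
    if any(not chars[p - 1].isdigit() for p in DIGIT_POSITIONS):
        return False
    return all(chars[p - 1] in allowed for p, allowed in SPECIAL_POSITIONS.items())

print(matches_vin_config("LF53A23C8K3000001"))  # positions 3/15/16/17 digits → True
print(matches_vin_config("LF53A23C8K300000"))   # only 16 characters → False
```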

In the method, an image to be recognized is obtained, the image to be recognized at least comprises a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns. At least one location area is determined in the image to be recognized on the basis of a location detection model which has been integrated in the electronic device. And in at least one position area, screening the position area in which the aspect ratio of the included text pattern meets the preset VIN aspect ratio requirement and the font of the included text pattern meets the preset VIN font requirement. And identifying text characters respectively corresponding to a plurality of continuous text patterns in the screened position area based on a text identification model integrated in the electronic equipment. And determining text characters respectively corresponding to each text pattern in the target VIN pattern in the recognized text characters based on the preset VIN character configuration requirement.

In this application, because the location detection model is integrated in the electronic device in advance, at least one location area can be determined in the image to be recognized directly based on the integrated model, without the help of other devices and without data interaction with them. This achieves the purposes of not depending on a network and saving network resources in the scene of determining at least one location area in the image to be recognized.

Further, based on the preset VIN aspect-ratio requirement and the preset VIN font requirement, location areas containing only non-VIN text patterns can be filtered out of the at least one location area. This narrows the range over which text characters are recognized by the text recognition model and indirectly improves the accuracy of the text characters finally determined for each text pattern in the target VIN pattern.

Secondly, because the text recognition model is integrated in the electronic device in advance, the text characters corresponding to the continuous text patterns in the screened location area can be recognized directly based on the integrated model, without the help of other devices and without data interaction with them. This achieves the purposes of not depending on a network and saving network resources in the scene of recognizing those text characters.

In addition, based on the preset VIN character configuration requirement, text characters corresponding to text patterns in non-VIN patterns can be filtered out from the recognized text characters, so that the accuracy of the text characters corresponding to each text pattern in the finally determined target VIN patterns can be improved.

In an embodiment of the present application, referring to fig. 2, a training manner of the text recognition model includes:

in step S201, a plurality of sample data sets are obtained. A sample data set includes a sample image; the sample image at least includes a sample VIN pattern of a vehicle, and the sample VIN pattern includes a plurality of sample text patterns. The sample data set further includes annotated text characters corresponding to each of the sample text patterns.

The sample image may be a photographed image including a driving license of the vehicle, an image including an underside of a front windshield of the vehicle, an image including an engine surface of the vehicle, or the like.

The annotated text characters are not text characters embodied on an image; they exist in a form editable by the electronic device.

For example, the annotated text character may be a text character entered by a human worker in an input box displayed on a screen on the electronic device, or the like.

A sample text pattern in the sample VIN pattern embodies, in image form, one character of the sample VIN in the sample image.

The sample text pattern corresponds to a label text character, and the contents of the sample text pattern and the label text character are the same, but are embodied in different forms.

This step can be referred to the embodiment shown in fig. 3, and will not be described in detail here.

In step S202, the model is trained using a plurality of sample data sets until parameters in the model converge, thereby obtaining a text recognition model.

In the present application, the model may be a CRNN (Convolutional Recurrent Neural Network) model or the like, or a new CRNN model obtained by replacing the LSTM (Long Short-Term Memory) in the CRNN model with a GRU (Gated Recurrent Unit).
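As background for the LSTM-to-GRU substitution, a single GRU step over scalar state can be sketched as follows. The weight values are arbitrary illustrative numbers; a trained CRNN learns vector-valued versions of them. The GRU keeps the gating idea of the LSTM but uses only an update gate and a reset gate, so it has fewer parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    """One GRU step on scalars; w holds the weights of the two gates and the candidate."""
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)               # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)               # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_cand                      # interpolate old and new

w = {"wz": 0.5, "uz": 0.1, "wr": 0.4, "ur": 0.2, "wh": 0.9, "uh": 0.3}
h = 0.0
for x in [1.0, -0.5, 0.25]:   # a toy input sequence
    h = gru_step(h, x, w)
print(-1.0 < h < 1.0)         # the hidden state stays bounded → True
```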

In the embodiment of the application, the text recognition model is trained on sample images that at least contain a sample VIN pattern of a vehicle, so the trained model can focus on VINs and has a strong capability of recognizing them. This improves the accuracy of recognizing, based on the text recognition model, the text characters corresponding to the plurality of text patterns in a VIN pattern.

In one embodiment of the present application, referring to fig. 3, step S201 includes:

in step S301, a first image is acquired, the first image including a first VIN pattern of the vehicle. The first VIN pattern comprises a plurality of first text patterns, a labeling area of the first VIN pattern in the first image is obtained, and labeling text characters corresponding to the first text patterns are obtained.

The first image may be a photographed image including a driving license of the vehicle, an image including an underside of a front windshield of the vehicle, an image including an engine surface of the vehicle, or the like.

The annotated text characters are not text characters embodied on an image; they exist in a form editable by the electronic device.

The annotation text character can be a text character or the like input by a worker in an input box displayed on a screen of the electronic equipment.

A first text pattern of the first VIN pattern embodies, in image form, one character of the first VIN in the first image.

The first text pattern corresponds to a label text character, and the contents of the first text pattern and the label text character are the same, but are embodied in different forms.

The annotated region of the first VIN pattern in the first image may be annotated on the first image by the worker.

In step S302, a plurality of first text patterns in the first VIN pattern are identified in the label area of the first image according to label text characters corresponding to the plurality of first text patterns, respectively.

In this application, OCR technology may be used to identify text patterns in the labeled area of the first image, and then filter, from the identified text patterns, text patterns corresponding to labeled text characters respectively corresponding to a plurality of first text patterns, and use the text patterns as a plurality of first text patterns in the first VIN pattern.

In step S303, a second VIN pattern different from the first VIN pattern is generated according to the plurality of first text patterns.

The text characters corresponding to the second text patterns in the second VIN pattern are not all the same as the text characters corresponding to the first text patterns in the first VIN pattern, and/or the order of the second text patterns in the second VIN pattern differs from the order of the first text patterns in the first VIN pattern.

In the present application, a text pattern is a state in which text characters are shown in the form of an image; the text pattern is not editable and can essentially be regarded as an image.

In other words, for the same text, when it is in an editable state on the electronic device it may be regarded as text characters, and when it is displayed as an image it may be regarded as a text pattern.

Thus, since each text pattern corresponds to one text character, a second VIN pattern meeting the preset VIN character configuration requirement can be generated from the plurality of first text patterns. For example, 17 text patterns may be selected from the plurality of first text patterns and combined into a VIN pattern that meets the preset VIN character configuration requirement; this combined pattern serves as the second VIN pattern.
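The recombination described in this step can be sketched as follows; plain characters stand in for the cut-out first text patterns, the seed is fixed for reproducibility, and the digit-position rule mirrors the preset VIN character configuration requirement described in step S105:

```python
import random

def make_second_vin(char_pool, digit_positions=(3, 15, 16, 17), seed=0):
    """Recombine characters from the first VIN pattern into a new 17-character VIN.

    char_pool lists the characters recovered from the first VIN pattern; here
    plain characters stand in for the corresponding image snippets.
    """
    rng = random.Random(seed)
    digits = [c for c in char_pool if c.isdigit()]
    chosen = [rng.choice(char_pool) for _ in range(17)]
    for p in digit_positions:                # force the required positions to digits
        chosen[p - 1] = rng.choice(digits)
    return "".join(chosen)

pool = list("LSVAU2C18FB3790")              # characters recovered from a first VIN
vin = make_second_vin(pool)
print(len(vin), all(vin[p - 1].isdigit() for p in (3, 15, 16, 17)))  # → 17 True
```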

In step S304, a second image is generated according to the second VIN pattern and a preset background image.

In this application, a plurality of different preset background images can be prepared in advance. For any one preset background image, the second VIN pattern can be superimposed on it to obtain a new image, which can be determined as a second image; the same is done for each other preset background image, so that a plurality of different second images can be obtained.
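Superimposing the second VIN pattern on a preset background image can be sketched as pasting one pixel grid into a copy of another at an offset (grayscale lists stand in for real images):

```python
def overlay(background, pattern, x, y):
    """Paste `pattern` into a copy of `background`, top-left corner at (x, y)."""
    out = [row[:] for row in background]   # copy so the original is not mutated
    for dy, prow in enumerate(pattern):
        for dx, pixel in enumerate(prow):
            out[y + dy][x + dx] = pixel
    return out

bg = [[255] * 5 for _ in range(3)]         # 5x3 white background
pat = [[0, 0]]                             # a tiny 2x1 "VIN pattern"
print(overlay(bg, pat, x=2, y=1))
# → [[255, 255, 255, 255, 255], [255, 255, 0, 0, 255], [255, 255, 255, 255, 255]]
```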

Both the first image and the second image may be sample images.

In one embodiment, when the second image is generated according to the second VIN pattern and the preset background image, the intermediate image may be generated according to the second VIN pattern and the preset background image. For example, the second VIN pattern may be superimposed on a preset background image to obtain a new image, which is used as an intermediate image, and then noise data may be added to the intermediate image according to a random noise generation algorithm to obtain the second image.

The random noise generation algorithm in this application includes a Gaussian random noise generation algorithm, a Perlin random noise generation algorithm, a salt-and-pepper random noise generation algorithm, and the like; the random noise generation algorithm is not limited in this application.
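A Gaussian variant of the noise step can be sketched as follows; the standard deviation and seed are illustrative choices, and pixel values are clamped back to the 0-255 range:

```python
import random

def add_gaussian_noise(image, sigma=8.0, seed=0):
    """Add zero-mean Gaussian noise to every pixel, clamping to the 0-255 range."""
    rng = random.Random(seed)
    return [
        [min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in row]
        for row in image
    ]

img = [[128] * 4 for _ in range(2)]        # a uniform gray intermediate image
noisy = add_gaussian_noise(img)
print(all(0 <= p <= 255 for row in noisy for p in row))  # → True
```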

In this way, the number of second images can be increased automatically, which in turn automatically increases the training data for training the text recognition model. The second images differ from one another, and the noise data added by the random noise generation algorithm often better matches real scenes, so the likelihood that the training data matches real scenes is improved and the generalization capability of the trained text recognition model is further improved. Moreover, provided the first image is available, no manual work is needed in the process of generating the second images from it, so labor cost can be reduced when increasing the training data.

In one embodiment, when the second image is generated according to the second VIN pattern and the preset background image, an intermediate image may be generated first. For example, the second VIN pattern may be superimposed on a preset background image to obtain a new image, which serves as the intermediate image. The intermediate image may then be blurred to obtain a blurred image. The degree of blurring may be uniform or non-uniform; there may be multiple blurred images, with different blurred images blurred to different degrees, or with the same position blurred to different degrees in different blurred images. The second image may then be obtained from the blurred image, for example by using the blurred image as the second image.

The way of blurring the image can be referred to the way already existing in the market, and will not be described in detail here.
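As one concrete stand-in for the blurring processing, a 3x3 box filter replaces each pixel with the mean of its neighborhood (clipped at the image border); applying it with different kernel sizes or to different regions yields the non-uniform blurring mentioned above:

```python
def box_blur(image):
    """3x3 mean filter; neighborhoods are clipped at the image border."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            neighbors = [
                image[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            ]
            row.append(sum(neighbors) / len(neighbors))
        out.append(row)
    return out

sharp = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]   # a single bright pixel
print(box_blur(sharp)[1][1])  # the center becomes the 3x3 mean → 10.0
```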

In this way, the number of second images can be increased automatically, which in turn automatically increases the training data for training the text recognition model. The second images differ from one another, and the generated blurred images may better match real scenes; for example, the blurred regions may be non-uniformly distributed in a more random and natural way. This improves the likelihood that the training data matches real scenes and further improves the generalization capability of the trained text recognition model, including its generalization capability in real scenes after deployment. Moreover, provided the first image is available, no manual work is needed in the process of generating the second images from it, so labor cost can be reduced when increasing the training data.

In step S305, a sample data set is generated according to the labeled text characters respectively corresponding to the first image and the plurality of first text patterns; acquiring text characters respectively corresponding to a plurality of second text patterns in the second VIN pattern; and generating a sample data set according to the text characters respectively corresponding to the second image and the plurality of second text patterns.

Because the first image is collected manually by a worker on the electronic device, and the annotated text characters corresponding to the first text patterns are annotated manually by the worker on the electronic device, the electronic device already has both; thus, the first image and the annotated text characters corresponding to the first text patterns can be combined into a sample data set.

In an embodiment, there are a plurality of first images, each different. For any one first image, the first image and the annotated text characters corresponding to the plurality of first text patterns included in its first VIN pattern may be combined into one sample data set; the same operation is performed for each other first image, so as to obtain a plurality of sample data sets containing different first images.

The second image is generated by the electronic device according to the second VIN pattern and the preset background image, and the second VIN pattern is generated from a plurality of first text patterns in the first VIN pattern. Therefore, the text characters corresponding to each second text pattern in the second VIN pattern can be determined from the first text patterns used to generate it: the text characters corresponding to each such first text pattern can be obtained, and from them the text characters corresponding to each second text pattern can be determined. The second image and the text characters corresponding to each second text pattern can then be combined into a sample data set.

In an embodiment, there are a plurality of second images, each different. For any one second image, the second image and the text characters corresponding to the second text patterns included in its second VIN pattern may form one sample data set; the same operation is performed for each other second image, so as to obtain a plurality of sample data sets containing different second images.

In this application, a large number of sample images are generated automatically, and the annotated text characters corresponding to the sample text patterns in their sample VIN patterns are also generated automatically. This avoids manually collecting a large number of sample images and manually annotating the text characters for each of them, which improves the efficiency of collecting sample data sets and reduces labor cost.

It is noted that, for simplicity of explanation, the method embodiments are described as a series of acts, but those skilled in the art will appreciate that the present application is not limited by the order of acts described, as some steps may, in accordance with the present application, occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and not every act described is necessarily required by this application.

Referring to fig. 4, a block diagram of an image processing apparatus according to the present application is shown, and the apparatus is applied to a client, and specifically includes the following modules:

a first obtaining module 11, configured to obtain an image to be identified, where the image to be identified at least includes a target VIN pattern of a vehicle, and the target VIN pattern includes a plurality of text patterns;

a first determining module 12, configured to determine at least one location area in the image to be recognized based on a location detection model that is integrated in the electronic device, where each location area includes a plurality of text patterns that are continuous; the plurality of text patterns in one of the at least one location area comprise respective ones of the target VIN patterns;

the screening module 13 is configured to screen, in at least one location area, a location area where an aspect ratio of the included text pattern meets a preset VIN aspect ratio requirement and a font of the included text pattern meets a preset VIN font requirement; a plurality of text patterns in one of the screened location areas comprise each text pattern in the target VIN patterns;

the recognition module 14 is used for recognizing text characters corresponding to a plurality of continuous text patterns in the screened position areas respectively based on a text recognition model integrated in the electronic equipment;

and a second determining module 15, configured to determine, based on a preset VIN character configuration requirement, text characters corresponding to each text pattern in the target VIN patterns in the recognized text characters.

In an optional implementation, the apparatus further comprises:

a second obtaining module, configured to obtain multiple sample data sets, where each sample data set includes a sample image, the sample image at least includes a sample VIN pattern of a vehicle, and the sample VIN pattern includes multiple sample text patterns, and the sample data set further includes: marking text characters corresponding to the sample text patterns respectively;

and the training module is used for training the model by using a plurality of sample data sets until parameters in the model are converged, so that the text recognition model is obtained.

In an optional implementation manner, the second obtaining module includes:

the vehicle identification device comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring a first image, and the first image comprises a first VIN pattern of a vehicle; the first VIN pattern comprises a plurality of first text patterns, a labeling area of the first VIN pattern in the first image is obtained, and labeling text characters corresponding to the first text patterns are obtained;

the identification unit is used for identifying a plurality of first text patterns in the first VIN pattern in the labeling area of the first image according to labeling text characters respectively corresponding to the plurality of first text patterns;

a first generating unit, configured to generate a second VIN pattern different from the first VIN pattern according to the plurality of first text patterns;

the second generating unit is used for generating a second image according to the second VIN pattern and a preset background image;

a third generating unit, configured to generate a sample data set according to the first image and the labeled text characters corresponding to the plurality of first text patterns, respectively;

a second obtaining unit, configured to obtain text characters corresponding to a plurality of second text patterns in the second VIN pattern;

and the fourth generating unit is used for generating a sample data set according to the second image and the text characters respectively corresponding to the plurality of second text patterns.

In an optional implementation manner, the second generating unit includes:

the first generating subunit is used for generating an intermediate image according to the second VIN pattern and a preset background image;

and the adding subunit is used for adding noise data in the intermediate image according to a random noise generation algorithm to obtain the second image.

In an optional implementation manner, the second generating unit includes:

the second generating subunit is used for generating an intermediate image according to the second VIN pattern and a preset background image;

a blurring subunit, configured to perform blurring processing on the intermediate image to obtain a blurred image;

an obtaining subunit, configured to obtain the second image according to the blurred image.
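The blurring subunit's processing can be sketched with Pillow's Gaussian blur; the blur radius and image size here are illustrative assumptions:

```python
from PIL import Image, ImageFilter

def blur_intermediate(image, radius=2):
    """Apply Gaussian blurring to the intermediate image; the blurred
    result serves as the second image."""
    return image.filter(ImageFilter.GaussianBlur(radius=radius))

# Flat gray image standing in for the intermediate image.
intermediate = Image.new("RGB", (260, 40), color=(200, 200, 200))
blurred = blur_intermediate(intermediate)
```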

In a third aspect, the present application shows an electronic device comprising:

a processor;

a memory for storing processor-executable instructions;

wherein the processor is configured to perform the image processing method according to the first aspect.

In a fourth aspect, the present application shows a non-transitory computer readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.

In a fifth aspect, the present application shows a computer program product, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.

The technical scheme provided by the application can comprise the following beneficial effects:

In the method, an image to be recognized is obtained; the image to be recognized comprises at least a target VIN pattern of a vehicle, and the target VIN pattern comprises a plurality of text patterns. At least one location area is determined in the image to be recognized based on a location detection model that has been integrated in the electronic device. Among the at least one location area, the location areas in which the aspect ratio of the included text patterns meets the preset VIN aspect ratio requirement and the font of the included text patterns meets the preset VIN font requirement are screened out. Text characters respectively corresponding to a plurality of continuous text patterns in the screened location area are recognized based on a text recognition model integrated in the electronic device. Among the recognized text characters, the text characters respectively corresponding to the text patterns in the target VIN pattern are determined based on the preset VIN character configuration requirement.

In the present application, because the location detection model is integrated in the electronic device in advance, the at least one location area can be determined in the image to be recognized directly based on that model, without resorting to other devices or exchanging data with them. Thus, in the scenario of determining at least one location area in the image to be recognized, the method does not depend on a network and saves network resources.

Further, based on the preset VIN aspect ratio requirement and the preset VIN font requirement, location areas containing text patterns that do not belong to a VIN pattern can be filtered out of the at least one location area. This narrows the range over which text characters are recognized by the text recognition model and thereby indirectly improves the accuracy of the text characters finally determined for each text pattern in the target VIN pattern.
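The aspect-ratio screening can be sketched as below. The ratio bounds are illustrative assumptions (a 17-character single-line VIN region is much wider than it is tall), and the font check is omitted from this sketch:

```python
def filter_vin_regions(regions, min_ratio=8.0, max_ratio=30.0):
    """Keep only detected regions (x, y, w, h) whose width/height ratio
    falls in the assumed VIN range; square logos, license plates and
    short text lines are filtered out."""
    kept = []
    for (x, y, w, h) in regions:
        ratio = w / h
        if min_ratio <= ratio <= max_ratio:
            kept.append((x, y, w, h))
    return kept

candidates = [
    (10, 10, 340, 20),   # wide strip, ratio 17: plausible VIN region
    (50, 60, 80, 80),    # square logo, ratio 1: rejected
    (5, 120, 120, 40),   # short text line, ratio 3: rejected
]
print(filter_vin_regions(candidates))  # → [(10, 10, 340, 20)]
```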

Secondly, because the text recognition model is likewise integrated in the electronic device in advance, the text characters corresponding to the continuous text patterns in the screened location area can be recognized directly based on that model, again without resorting to other devices or exchanging data with them. Thus, in the scenario of recognizing these text characters, the method does not depend on a network and saves network resources.

In addition, based on the preset VIN character configuration requirement, text characters corresponding to text patterns that do not belong to a VIN pattern can be filtered out of the recognized text characters, which improves the accuracy of the text characters finally determined for each text pattern in the target VIN pattern.
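A minimal sketch of such a character-configuration check, assuming the preset requirement is the standard 17-character VIN alphabet (digits and uppercase letters excluding I, O, and Q); the application's actual configuration requirement may differ:

```python
import re

# Assumed configuration: 17 characters drawn from digits and uppercase
# letters excluding I, O and Q (the standard VIN alphabet).
VIN_PATTERN = re.compile(r"[0-9A-HJ-NPR-Z]{17}")

def select_vin_characters(recognized_lines):
    """Among the recognized character sequences, return the characters
    of the first one satisfying the preset VIN configuration; sequences
    from non-VIN text patterns are filtered out."""
    for line in recognized_lines:
        if VIN_PATTERN.fullmatch(line):
            return list(line)
    return None

# Hypothetical recognizer output: two non-VIN lines and one VIN line.
lines = ["ENGINE NO 12345", "LFV2A21K8A1234567", "MADE IN GERMANY"]
print(select_vin_characters(lines))
```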

Since the apparatus embodiment is substantially similar to the method embodiment, it is described relatively briefly; for relevant details, refer to the corresponding description of the method embodiment.

Optionally, an embodiment of the present invention further provides an electronic device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements each process of the image processing method embodiment described above and can achieve the same technical effect; to avoid repetition, details are not repeated here.

An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the image processing method embodiment described above and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Fig. 5 is a block diagram of an electronic device 800 shown in the present application. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.

Referring to fig. 5, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.

The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.

The memory 804 is configured to store various types of data to support the operation of the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, images, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, and magnetic or optical disks.

The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.

The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the electronic device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.

The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.

The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.

The sensor assembly 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.

In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

Fig. 6 is a block diagram of an electronic device 1900 shown in the present application. For example, the electronic device 1900 may be provided as a server.

Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.

The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.

While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.

The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
