Intelligent text error correction method and device, electronic equipment and readable storage medium

Document No.: 1215827  Publication date: 2020-09-04  Views: 13  Language: Chinese

Reading note: This technology, "Intelligent text error correction method and device, electronic equipment and readable storage medium", was designed and created by 谢静文, 阮晓雯 and 徐亮 on 2020-04-23. Its main content is as follows. The invention relates to artificial intelligence technology, can be applied to the field of smart cities, and discloses an intelligent text error correction method comprising: performing unsupervised training on an original text error correction model by using an unlabeled text set to obtain a primary text error correction model; performing supervised training on the primary text error correction model by using a labeled text set to obtain a standard text error correction model; performing a text masking operation on the text to be corrected to obtain masked text; inputting the masked text into the standard text error correction model to obtain a predicted text and a prediction probability value of the predicted text; and, when the predicted text is different from the text to be corrected and the prediction probability value is greater than a preset probability value, performing text error correction on the text to be corrected according to the predicted text. The invention also provides an intelligent text error correction device, an electronic device and a computer-readable storage medium. The invention can improve the text error correction effect without excessive consumption of manpower and computing resources. In addition, the invention relates to blockchain technology, and the text and text sets may be stored in a blockchain.

1. An intelligent text correction method, characterized in that the method comprises:

performing unsupervised training on a pre-constructed original text error correction model by using an unlabeled text set to obtain a primary text error correction model;

performing supervised training on the primary text error correction model by using a labeled text set to obtain a standard text error correction model;

acquiring a text to be corrected, performing a text masking operation on the text to be corrected to obtain one or more groups of masked texts, and inputting the masked texts into the standard text error correction model to obtain a predicted text and a prediction probability value of the predicted text;

and when the predicted text is different from the text to be corrected and the prediction probability value is greater than a preset probability value, performing text correction on the text to be corrected according to the predicted text.

2. The intelligent text error correction method of claim 1, wherein the unsupervised training of the pre-constructed original text error correction model with the unlabeled text set to obtain the primary text error correction model comprises:

converting the unmarked text set into a text vector set according to a pre-constructed text vector conversion method;

carrying out position marking on the unmarked text set to obtain a text position set;

converting the text position set into a position vector set according to the text vector conversion method;

and inputting the position vector set and the text vector set into the original text error correction model for unsupervised training until the training times of the unsupervised training meet the preset training requirement, and quitting the training to obtain a primary text error correction model.

3. The intelligent text error correction method of claim 2, wherein the converting the set of unlabeled text into a set of text vectors according to a pre-constructed text vector conversion method comprises:

converting the unmarked text set into a text vector set by adopting a conversion method as follows:

ζ(ω, j) = [σ(X_ω^T θ)]^(1 − d_j^ω) · [1 − σ(X_ω^T θ)]^(d_j^ω)

where ω represents a path value of the text decision tree on which the text vector conversion method is based; j represents an index into the unlabeled text set and is a positive integer; ζ(ω, j) represents the text vector of the j-th unlabeled text of the unlabeled text set under the path ω; d_j^ω represents the Huffman code corresponding to the j-th node in the path ω, where the path ω is a positive integer; θ is an iteration factor of the text vector conversion method; σ represents the sigmoid function; and X_ω is the unlabeled text set.

4. The intelligent text error correction method of claim 2, wherein the inputting the set of location vectors and the set of text vectors into the original text error correction model for unsupervised training comprises:

dividing the text vector set into a plurality of word vector sets by taking vector data as a dividing unit;

dividing the text vector set into a plurality of paragraph vector sets by taking vector rows as the division unit;

and calculating the weight relationship among each group of word vector sets, each group of paragraph vector sets and the position vector set, and updating the internal parameters of the original text error correction model according to the weight relationship.

5. The intelligent text error correction method of claim 4, wherein the calculating the weight relationship of each group of the word vector set, each group of the paragraph vector set and the position vector set comprises:

sequentially selecting any one vector in the word vector set, the paragraph vector set and the position vector set as a target vector;

executing the text masking operation on the target vector to obtain a masking vector;

calculating the weights of the masking vector against the other vectors in the word vector set, the paragraph vector set and the position vector set to obtain a weight set, and performing weighted fusion on the weight set to obtain the weight relationship.

6. The intelligent text correction method according to any one of claims 2 to 5, wherein the supervised training of the primary text correction model by using the labeled text set to obtain a standard text correction model comprises:

extracting labels of the marked texts from the marked text set to obtain a real label set;

converting the marked text set into a marked text vector set according to the text vector conversion method;

inputting the labeled text vector set into the primary text error correction model for supervised training to obtain a predicted label set;

and if the error range of the predicted label set and the real label set is larger than a preset error, continuing the supervised training until the error range of the predicted label set and the real label set is smaller than the preset error, and quitting the supervised training to obtain the standard text error correction model.

7. The intelligent text correction method according to any one of claims 1 to 5, further comprising:

when the predicted text is the same as the text to be corrected, re-receiving the text to be corrected; or

and when the predicted text is different from the text to be corrected and the predicted probability value is smaller than a preset probability value, re-receiving the text to be corrected.

8. An intelligent text correction apparatus, the apparatus comprising:

the unsupervised training module is used for performing label calculation on a current information set according to the corresponding relationship between a historical information set and a historical label set to obtain a current label set;

the supervised training module is used for performing label adjustment on the current label set according to a preset adjustment factor to obtain a standard label set;

the predicted text module is used for extracting label features from the standard label set according to a convolutional neural network feature extraction technique to obtain a feature extraction set;

and the text error correction module is used for taking the feature extraction set as an input value of a trained classification neural network for classification prediction to obtain an information classification result.

9. An electronic device, characterized in that the electronic device comprises:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the intelligent text correction method of any one of claims 1 to 7.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the intelligent text correction method according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of artificial intelligence, in particular to an intelligent text error correction method, an intelligent text error correction device, electronic equipment and a readable storage medium.

Background

Text error correction has broad application prospects, such as intelligent correction and prompting of complex terms in the medical field, speeding up the work of prescription printing clerks, and correcting misspelled text in office chat to prevent low-level errors.

Existing text error correction technology falls mainly into two types: first, traditional text error correction models built with distance calculation methods; second, deep learning text error correction models trained on a large corpus. Both can accomplish text error correction to some extent. However, the deep learning model requires a large amount of corpus data in the training stage, consuming manpower and computing resources throughout corpus collection, cleaning, and subsequent training, while the traditional model has poor robustness, weak error correction capability, and an unsatisfactory effect in certain specific scenarios, particularly for text in the medical field.

Disclosure of Invention

The invention provides an intelligent text error correction method, an intelligent text error correction apparatus, an electronic device, and a computer-readable storage medium, and mainly aims to improve the text error correction effect without excessive consumption of manpower and computing resources.

In order to achieve the above object, the present invention provides an intelligent text error correction method, which comprises:

performing unsupervised training on a pre-constructed original text error correction model by using an unlabeled text set to obtain a primary text error correction model;

performing supervised training on the primary text error correction model by using a labeled text set to obtain a standard text error correction model;

acquiring a text to be corrected, performing a text masking operation on the text to be corrected to obtain one or more groups of masked texts, and inputting the masked texts into the standard text error correction model to obtain a predicted text and a prediction probability value of the predicted text;

and when the predicted text is different from the text to be corrected and the prediction probability value is greater than a preset probability value, performing text correction on the text to be corrected according to the predicted text.

Optionally, the performing unsupervised training on the pre-constructed original text error correction model by using the unlabeled text set to obtain a primary text error correction model includes:

converting the unmarked text set into a text vector set according to a pre-constructed text vector conversion method;

carrying out position marking on the unmarked text set to obtain a text position set;

converting the text position set into a position vector set according to the text vector conversion method;

and inputting the position vector set and the text vector set into the original text error correction model for unsupervised training until the training times of the unsupervised training meet the preset training requirement, and quitting the training to obtain a primary text error correction model.

Optionally, the converting the unlabeled text set into a text vector set according to a pre-constructed text vector conversion method includes:

converting the unmarked text set into a text vector set by adopting a conversion method as follows:

ζ(ω, j) = [σ(X_ω^T θ)]^(1 − d_j^ω) · [1 − σ(X_ω^T θ)]^(d_j^ω)

where ω denotes a path value of the text decision tree on which the text vector conversion method is based; j denotes an index into the unlabeled text set and is a positive integer; ζ(ω, j) denotes the text vector of the j-th unlabeled text of the unlabeled text set under the path ω; d_j^ω denotes the Huffman code corresponding to the j-th node in the path ω, where the path ω is a positive integer; θ is an iteration factor of the text vector conversion method; σ denotes the sigmoid function; and X_ω is the unlabeled text set.

Optionally, the inputting the position vector set and the text vector set to the original text error correction model for unsupervised training includes:

dividing the text vector set into a plurality of word vector sets by taking vector data as a dividing unit;

dividing the text vector set into a plurality of paragraph vector sets by taking vector rows as the division unit;

and calculating the weight relationship among each group of word vector sets, each group of paragraph vector sets and the position vector set, and updating the internal parameters of the original text error correction model according to the weight relationship.

Optionally, the calculating a weight relationship between each group of the word vector set, each group of the paragraph vector set, and the position vector set includes:

sequentially selecting any one vector in the word vector set, the paragraph vector set and the position vector set as a target vector;

executing the text masking operation on the target vector to obtain a masking vector;

calculating the weights of the masking vector against the other vectors in the word vector set, the paragraph vector set and the position vector set to obtain a weight set, and performing weighted fusion on the weight set to obtain the weight relationship.

Optionally, the performing supervised training on the primary text correction model by using the labeled text set to obtain a standard text correction model includes:

extracting labels of the marked texts from the marked text set to obtain a real label set;

converting the marked text set into a marked text vector set according to the text vector conversion method;

inputting the labeled text vector set into the primary text error correction model for supervised training to obtain a predicted label set;

and if the error range of the predicted label set and the real label set is larger than a preset error, continuing the supervised training until the error range of the predicted label set and the real label set is smaller than the preset error, and quitting the supervised training to obtain the standard text error correction model.

Optionally, the method further comprises:

when the predicted text is the same as the text to be corrected, re-receiving the text to be corrected;

and when the predicted text is different from the text to be corrected and the predicted probability value is smaller than a preset probability value, re-receiving the text to be corrected.

In order to solve the above problem, the present invention further provides an intelligent text error correction apparatus, comprising:

the unsupervised training module is used for performing label calculation on a current information set according to the corresponding relationship between a historical information set and a historical label set to obtain a current label set;

the supervised training module is used for performing label adjustment on the current label set according to a preset adjustment factor to obtain a standard label set;

the predicted text module is used for extracting label features from the standard label set according to a convolutional neural network feature extraction technique to obtain a feature extraction set;

and the text error correction module is used for taking the feature extraction set as an input value of a trained classification neural network for classification prediction to obtain an information classification result.

In order to solve the above problem, the present invention also provides an electronic device, including:

a memory storing at least one instruction; and

and the processor executes the instructions stored in the memory to realize the intelligent text error correction method.

In order to solve the above problem, the present invention further provides a computer-readable storage medium, wherein at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is executed by a processor in an electronic device to implement the intelligent text error correction method according to any one of the above items.

The invention performs unsupervised training on the pre-constructed original text error correction model with the unlabeled text set, performs supervised training with the labeled text set, and predicts text through the text masking operation and the trained model. The intelligent text error correction method, apparatus, electronic device, and computer-readable storage medium of the invention can therefore improve the text error correction effect without excessive consumption of manpower and computing resources.

Drawings

Fig. 1 is a schematic flowchart of an intelligent text error correction method according to an embodiment of the present invention;

fig. 2 is a detailed flowchart illustrating the step S1 in the intelligent text error correction method according to an embodiment of the present invention;

fig. 3 is a detailed flowchart illustrating the step S2 in the intelligent text error correction method according to an embodiment of the present invention;

fig. 4 is a schematic block diagram of an intelligent text error correction apparatus according to an embodiment of the present invention;

fig. 5 is a schematic diagram of the internal structure of an electronic device implementing the intelligent text error correction method according to an embodiment of the present invention;

the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

The invention provides an intelligent text error correction method. Fig. 1 is a schematic flow chart of an intelligent text error correction method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.

In this embodiment, the intelligent text correction method includes:

s1, carrying out unsupervised training on the pre-constructed original text error correction model by using the unlabeled text set to obtain a primary text error correction model.

Text error correction has wide application scenarios, particularly in the medical field, where word errors appear in many medical books and prescriptions because of complex terminology. For example, a patient develops seborrheic dermatitis due to excessive stress and a doctor prescribes "compound ketoconazole hair lotion", but the prescription printing clerk mistakenly prints it as "compound ketokonazole hair lotion". At this point, the technical scheme of the invention can perform intelligent error correction.

The unlabeled text set is a text set without labels. For example, both "compound ketoconazole hair lotion" and "compound ketokonazole hair lotion" can appear as unlabeled texts. Put simply, "compound ketoconazole hair lotion" is a correct written form and "compound ketokonazole hair lotion" is an incorrect one, but no label is attached to indicate whether either is correct.

In detail, the step S1 can be shown in the detailed flow chart of fig. 2, and includes:

s11, converting the unmarked text set into a text vector set according to a pre-constructed text vector conversion method;

s12, carrying out position labeling on the unmarked text set to obtain a text position set, and converting the text position set into a position vector set according to the text vector conversion method;

and S13, inputting the position vector set and the text vector set into the original text error correction model for unsupervised training until the training times of the unsupervised training meet the preset training requirement, and quitting the training to obtain a primary text error correction model.

Further, the text vector conversion method may adopt, for example, one-hot word vector conversion or Word2Vec word vector conversion.

The preferred embodiment of the present invention employs Word2Vec word vector conversion, which comprises:

vector converting the set of unlabeled text or the set of text positions as follows:

ζ(ω, j) = [σ(X_ω^T θ)]^(1 − d_j^ω) · [1 − σ(X_ω^T θ)]^(d_j^ω)

where ω represents a value of the decision-tree path on which the Word2Vec word vector conversion depends; j represents an index into the unlabeled text set and is a positive integer; ζ(ω, j) represents the text vector of an unlabeled text, or the position vector of a text position of the unlabeled text set, under the path ω; d_j^ω represents the Huffman code corresponding to the j-th node in the path ω, where the path ω is a positive integer; θ is an iteration factor of the Word2Vec word vector conversion; σ represents the sigmoid function; and X_ω is the unlabeled text set or the text position set.
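As a minimal numeric sketch, the node expression above can be evaluated directly. The patent describes only a single iteration factor θ; modelling one parameter vector per Huffman-tree node (the `thetas` list) is an illustrative assumption, not something the text specifies.

```python
# Numeric sketch of the per-node factor sigmoid(x·θ)^(1−d)·(1−sigmoid(x·θ))^d
# and its product along a tree path, matching the expression above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def node_term(x, theta, d):
    """One node's factor, with d the node's Huffman code (0 or 1)."""
    s = sigmoid(np.dot(x, theta))
    return s ** (1 - d) * (1 - s) ** d

def path_value(x, thetas, codes):
    """Product of node factors along a tree path omega."""
    return float(np.prod([node_term(x, t, d) for t, d in zip(thetas, codes)]))
```

With zero vectors each node factor is sigmoid(0) = 0.5, so a two-node path multiplies to 0.25.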

Through the above vector conversion, the original correct text "compound ketoconazole hair lotion" is converted into a text vector [1.6, 1.23, 6.91, 9.4, 12.7, 0.3, 17.03, 2.81, 1.04] and a position vector [0.11, 1.09, 3.59, 0.4, 0.75, 2.1, 5.1, 2.09, 3.77].

Preferably, the original text error correction model is obtained by modifying a BERT (Bidirectional Encoder Representations from Transformers) model.

In detail, the inputting of the position vector set and the text vector set into the original text error correction model for unsupervised training includes: dividing the text vector set into a plurality of groups of word vector sets with the data in the vector set as the division unit; dividing the text vector set into a plurality of groups of paragraph vector sets with rows as the division unit; and calculating the weight relationship among each group of word vector sets, each group of paragraph vector sets, and the position vector set, and updating the internal parameters of the original text error correction model according to the weight relationship.

The text vector of "compound ketoconazole hair lotion" is [1.6, 1.23, 6.91, 9.4, 12.7, 0.3, 17.03, 2.81, 1.04] and its position vector is [0.11, 1.09, 3.59, 0.4, 0.75, 2.1, 5.1, 2.09, 3.77]. If the text vector is divided with data as the unit, several word vector sets such as [0.75, 2.1], [1.6, 2.81, 1.04], and [0.3, 17.03, 2.81, 1.04] can be obtained. If the text vector has multiple rows, for example [[1.6, 1.23], [6.91, 9.4]], then dividing by row yields two paragraph vector sets, [1.6, 1.23] and [6.91, 9.4].
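The two division steps can be sketched as follows. The chunk size of three and the example row layout are illustrative assumptions, since the patent does not fix group boundaries.

```python
# Sketch of the two division steps: splitting a flat text vector into word
# vector sets (data items as the division unit) and a row-structured vector
# into paragraph vector sets (rows as the division unit).
text_vector = [1.6, 1.23, 6.91, 9.4, 12.7, 0.3, 17.03, 2.81, 1.04]

def split_by_data(vector, size):
    """Divide the vector into word vector sets of `size` data items each."""
    return [vector[i:i + size] for i in range(0, len(vector), size)]

word_vector_sets = split_by_data(text_vector, 3)

# A multi-row text vector divides by row into paragraph vector sets.
row_vector = [[1.6, 1.23], [6.91, 9.4]]
paragraph_vector_sets = list(row_vector)
```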

Further, the calculating of the weight relationship among each group of the word vector sets, each group of the paragraph vector sets, and the position vector set includes: randomly selecting a vector from any one of the word vector set, the paragraph vector set, and the position vector set as a target vector; performing the text masking operation on the target vector to obtain a masking vector; calculating the weights of the masking vector against the vectors in each vector set to obtain a weight set; and performing weighted fusion on the weight set to obtain the weight relationship.

For example, taking the word vector [0.3, 17.03, 2.81, 1.04] of "compound ketoconazole hair lotion" as the target vector, the text masking operation masks any one data item; if the masking operation changes [0.3, 17.03, 2.81, 1.04] into [0.3, [MASK], 2.81, 1.04], then the weight set is obtained by calculating the weights of [0.3, [MASK], 2.81, 1.04] against the other word vectors, paragraph vectors, and position vectors.

In detail, a similarity calculation method may be used to compute the weight between the masking vector and the vectors in each vector set; the similarity calculation may use a publicly available method such as cosine similarity or Euclidean distance.
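A minimal sketch of this weight calculation with the cosine method: the weight of the masking vector against each other vector is taken as their cosine similarity. The vectors used are illustrative.

```python
# Weight of the masking vector against every other vector, using cosine
# similarity as the similarity calculation method named in the text.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def weight_set(masking_vector, other_vectors):
    """One weight per other vector, later fused into the weight relationship."""
    return [cosine_similarity(masking_vector, v) for v in other_vectors]
```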

The weighted fusion may adopt a Gaussian-distribution fusion method, a linear method (such as a linear function), or a nonlinear method (such as a quadratic function). For example, if the weight set is [0.101, 3.091, 2.057, 0.4, 0.756, 2.71, 5.103], fitting it with a linear function yields the k value and b value of that function, which are then used as internal parameters of the original text error correction model.
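The linear-function fusion can be sketched with a least-squares fit. Using the index positions as the x-axis and `np.polyfit` for the fit are assumptions; the patent does not specify how the fit is performed.

```python
# Sketch of the linear fusion: fit weight_i ≈ k*i + b over the weight set
# by least squares and keep (k, b) as internal model parameters.
import numpy as np

def linear_fusion(weights):
    indices = np.arange(len(weights))
    k, b = np.polyfit(indices, weights, 1)  # degree-1 least-squares fit
    return float(k), float(b)

weights = [0.101, 3.091, 2.057, 0.4, 0.756, 2.71, 5.103]
k, b = linear_fusion(weights)  # k and b become internal parameters
```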

And S2, carrying out supervision training on the primary text error correction model by using the marked text set to obtain a standard text error correction model.

The labeled text set corresponds to the unlabeled text set and is a text set to which labels have been added. As described in S1, "compound ketoconazole hair lotion" and the misspelled "compound ketokonazole hair lotion" can both be unlabeled texts; the labeled text set, by contrast, attaches a correctly-written label to "compound ketoconazole hair lotion", and incorrectly written texts are generally not used.

The supervised training has the same basic form as the unsupervised training. In detail, the step of performing supervised training on the primary text error correction model by using the labeled text set to obtain the standard text error correction model is shown in the detailed flow diagram of step S2 in fig. 3, and includes:

s21, extracting the labels of the marked texts from the marked text set to obtain a real label set;

s22, converting the marked text set into a marked text vector set according to the text vector conversion method;

s23, inputting the marked text vector set to the primary text error correction model for supervised training to obtain a prediction label set;

s24, judging whether the error range of the predicted label set and the real label set is larger than a preset error or not, if so, continuing the supervised training until the error range of the predicted label set and the real label set is smaller than the preset error, and quitting the supervised training to obtain a standard text error correction model.

S3, obtaining a text to be corrected, carrying out text masking operation on the text to be corrected to obtain one or more groups of masked texts, and inputting the masked texts into the standard text correction model to obtain a predicted text and a prediction probability value of the predicted text.

If the prescription printing clerk mistakenly prints "compound ketoconazole hair lotion" as "compound ketokonazole hair lotion", then "compound ketokonazole hair lotion" is the text to be corrected. Performing the text masking operation on it yields masked texts such as "compound keto[MASK]onazole hair lotion" and "compound ketokonazole hair [MASK]otion".

In detail, inputting the masked text into the standard text error correction model to obtain a predicted text and a prediction probability value of the predicted text includes: converting the masked text into a masked vector according to the text vector conversion method, and inputting the masked vector into the standard text error correction model to obtain the predicted text and its prediction probability value.
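The masking-and-prediction flow of S3 can be sketched as below. `fill_mask` is a hypothetical stand-in for the trained standard text error correction model (in practice, a BERT-style fill-mask model returning candidates with probability scores), and character-level masking with a literal "[MASK]" token is an illustrative assumption.

```python
# Illustrative sketch of S3: build masked variants of the text to be
# corrected and query a fill-mask model on each, keeping the most
# probable (predicted_text, probability) pair.

MASK = "[MASK]"

def make_masked_texts(text):
    """One masked variant per character position of the input text."""
    return [text[:i] + MASK + text[i + 1:] for i in range(len(text))]

def best_prediction(text, fill_mask):
    """Query the model on every masked variant; keep the highest-probability
    (predicted_text, probability) pair."""
    candidates = [fill_mask(masked) for masked in make_masked_texts(text)]
    return max(candidates, key=lambda c: c[1])
```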

And S4, judging whether the predicted text is the same as the text to be corrected.

Following the above example, after prediction on the masked text "compound keto[MASK]onazole hair lotion", the predicted text "compound ketoconazole hair lotion" is obtained, and it is then determined whether the predicted text "compound ketoconazole hair lotion" is the same as the text to be corrected, "compound ketokonazole hair lotion".

S5, if the predicted text is the same as the text to be corrected, text correction is not needed to be carried out on the text to be corrected, and the text to be corrected is received again.

If the predicted text were "compound ketokonazole hair lotion", the same as the text to be corrected "compound ketokonazole hair lotion", this would show that no error by the prescription printing clerk was found.

And S6, if the predicted text is different from the text to be corrected, judging whether the prediction probability value is greater than a preset probability value; if the prediction probability value is less than the preset probability value, text correction of the text to be corrected is not needed, and the text to be corrected is re-received.

Suppose the predicted text "compound ketoconazole hair lotion" is different from the text to be corrected "compound ketokonazole hair lotion", and the prediction probability of the predicted text "compound ketoconazole hair lotion" is 97%.

If the preset probability value is 99%, the accuracy of the predicted text does not meet the requirement, and text error correction is therefore not performed on the text to be corrected.

And S7, if the prediction probability value is larger than the preset probability value, carrying out text error correction on the text to be corrected according to the prediction text.

If instead the preset probability value is 96%, the text to be corrected is replaced with the predicted text "compound ketoconazole hair lotion", completing the text correction.
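The decision logic of steps S4–S7 reduces to a single comparison, sketched below; the 0.96 default threshold mirrors the 96% example above.

```python
# S4–S7 in one function: correct only when the predicted text differs from
# the input and its probability clears the preset threshold.

def correct_text(original, predicted, probability, preset_probability=0.96):
    if predicted != original and probability > preset_probability:
        return predicted   # S7: replace with the predicted text
    return original        # S5/S6: keep the input, re-receive the next text
```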

It should be emphasized that, in order to further ensure the privacy and security of the data, the text and the text set may also be stored in a node of a block chain.

The scheme can be applied to the sub-fields of intelligent medical treatment, intelligent education and the like in the field of the smart city, and accordingly the construction of the smart city is promoted.

Fig. 4 is a functional block diagram of the intelligent text error correction apparatus according to the present invention.

The intelligent text correction apparatus 100 of the present invention can be installed in an electronic device. Depending on the implemented functions, the intelligent text correction apparatus may include an unsupervised training module 101, a supervised training module 102, a predicted text module 103, and a text correction module 104. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.

In the present embodiment, the functions regarding the respective modules/units are as follows:

the unsupervised training module 101 is configured to perform unsupervised training on a pre-constructed original text error correction model by using an unlabeled text set to obtain a primary text error correction model;

the supervised training module 102 is configured to perform supervised training on the primary text error correction model by using a labeled text set to obtain a standard text error correction model;

the predicted text module 103 is configured to acquire a text to be corrected, perform a text masking operation on the text to be corrected to obtain one or more groups of masked texts, and input the masked texts into the standard text error correction model to obtain a predicted text and a prediction probability value of the predicted text;

and the text error correction module 104 is configured to perform text error correction on the text to be corrected according to the predicted text when the predicted text is different from the text to be corrected and the prediction probability value is greater than a preset probability value.

Fig. 5 is a schematic structural diagram of an electronic device for implementing an intelligent text error correction method according to the present invention.

The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as an intelligent text correction program 12, stored in the memory 11 and executable on the processor 10.

The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as code for intelligent text correction, but also to temporarily store data that has been output or is to be output.

The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., performing intelligent text error correction, etc.) stored in the memory 11 and calling data stored in the memory 11.

The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.

Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.

For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.

Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.

Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.

It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.

The intelligent text correction program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed by the processor 10, can implement:

and performing unsupervised training on the pre-constructed original text error correction model by using the unlabeled text set to obtain a primary text error correction model.

And carrying out supervision training on the primary text error correction model by utilizing the marked text set to obtain a standard text error correction model.

Acquiring a text to be corrected, performing text masking operation on the text to be corrected to obtain one or more groups of masked texts, and inputting the masked texts into the standard text correction model to obtain a predicted text and a predicted probability value of the predicted text.

And when the predicted text is different from the text to be corrected and the predicted probability value is greater than the preset probability value, performing text correction on the text to be corrected according to the predicted text.

Specifically, the processor 10 implements the above instructions in the following manner:

step one, carrying out unsupervised training on a pre-constructed original text error correction model by using an unmarked text set to obtain a primary text error correction model.

Text error correction has wide application scenarios, particularly in the medical field, where complex terminology causes word errors in many medical books and prescriptions. For example, a patient develops seborrheic dermatitis due to excessive stress, a doctor writes a prescription for "compound ketoconazole hair lotion", and the prescription printer mistakenly prints it as "compound tungconazole hair lotion"; at this point, the technical solution of the invention can perform intelligent error correction.

The unlabeled text set is a text set without labels. For example, "compound ketoconazole hair lotion" and "compound tungconazole hair lotion" are both unlabeled texts: simply understood, the former is a correct written form while the latter is an incorrect one, but neither text carries a label indicating whether it is correct.

In detail, the first step comprises:

converting the unmarked text set into a text vector set according to a pre-constructed text vector conversion method;

carrying out position marking on the unmarked text set to obtain a text position set, and converting the text position set into a position vector set according to the text vector conversion method;

and inputting the position vector set and the text vector set into the original text error correction model for unsupervised training until the training times of the unsupervised training meet the preset training requirement, and quitting the training to obtain a primary text error correction model.

Further, the text vector conversion method may adopt, for example, one-hot word vector conversion or Word2Vec word vector conversion.

The preferred embodiment of the present invention employs Word2Vec Word vector transformation, which comprises:

vector-converting the set of unlabeled texts or the set of text positions as follows:

ζ(ω, j) = [σ(X_ω · θ)]^(1 − d_j^ω) · [1 − σ(X_ω · θ)]^(d_j^ω)

where ω represents a path of the Huffman (decision) tree on which the Word2Vec word vector conversion depends, j represents the index within the unlabeled text set and is a positive integer, ζ(ω, j) represents the text vector of the j-th unlabeled text of the unlabeled text set, or the position vector of the j-th text position of the text position set, under path ω, d_j^ω represents the Huffman code corresponding to the j-th node on path ω, θ is the iteration factor of the Word2Vec word vector conversion, σ represents the sigmoid function, and X_ω is the set of unlabeled texts or the set of text positions.
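The per-node probability of the Word2Vec hierarchical-softmax conversion described above can be sketched directly from its symbols: each node j on Huffman path ω contributes σ(X_ω·θ) when its Huffman code d_j is 0, and 1 − σ(X_ω·θ) when d_j is 1. A minimal Python sketch (variable names are illustrative):

```python
import math

def sigmoid(x):
    """The sigma function in the conversion formula."""
    return 1.0 / (1.0 + math.exp(-x))

def node_probability(x_omega, theta, d_j):
    """Contribution of the j-th node on Huffman path omega:
    sigma(x . theta) when Huffman code d_j is 0, else 1 - sigma(x . theta)."""
    score = sum(xi * ti for xi, ti in zip(x_omega, theta))
    p = sigmoid(score)
    return p if d_j == 0 else 1.0 - p

def path_probability(x_omega, thetas, codes):
    """Product of the per-node probabilities along the whole path."""
    prob = 1.0
    for theta, d_j in zip(thetas, codes):
        prob *= node_probability(x_omega, theta, d_j)
    return prob
```

With zero parameter vectors each node contributes 0.5, so a two-node path yields probability 0.25, which matches the formula term by term.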

Through the above vector conversion, the originally correct "compound ketoconazole hair lotion" is converted into a text vector and a position vector, where the text vector is [1.6,1.23,6.91,9.4,12.7,0.3,17.03,2.81,1.04] and the position vector is [0.11,1.09,3.59,0.4,0.75,2.1,5.1,2.09,3.77].

Preferably, the original text error correction model is obtained by modifying a BERT model (referred to as BERT for short).

In detail, the inputting the position vector set and the text vector set to the original text error correction model for unsupervised training includes: dividing the text vector set into a plurality of groups of word vector sets by taking data in the vector set as a dividing unit, dividing the text vector set into a plurality of groups of paragraph vector sets by taking lines as a dividing unit, calculating the weight relationship among each group of word vector sets, each group of paragraph vector sets and the position vector set, and updating the internal parameters of the original text error correction model according to the weight relationship.

For example, the text vector of the compound ketoconazole hair lotion is [1.6,1.23,6.91,9.4,12.7,0.3,17.03,2.81,1.04] and its position vector is [0.11,1.09,3.59,0.4,0.75,2.1,5.1,2.09,3.77]. If the text vector is divided with data as the dividing unit, a plurality of word vector sets such as [0.75,2.1], [1.6,2.81,1.04] and [0.3,17.03,2.81,1.04] can be obtained. If the text vector of the compound ketoconazole hair lotion spans a plurality of lines, for example expressed as [[1.6,1.23],[6.91,9.4]], then dividing by line yields two paragraph vector sets, [1.6,1.23] and [6.91,9.4].
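One simple reading of the division above, as a hedged sketch: the patent does not fix how "data as a dividing unit" selects groups, so a sliding window of an assumed size is used here for the word vector sets, while line-wise splitting gives the paragraph vector sets.

```python
def split_into_word_vectors(text_vector, size):
    """Slide a fixed window over the flat text vector to form word vector
    groups (the window size is an assumption; the patent leaves it open)."""
    return [text_vector[i:i + size] for i in range(len(text_vector) - size + 1)]

def split_into_paragraph_vectors(rows):
    """When the text vector spans several lines, each line is one paragraph vector."""
    return [list(row) for row in rows]

text_vector = [1.6, 1.23, 6.91, 9.4, 12.7, 0.3, 17.03, 2.81, 1.04]
word_groups = split_into_word_vectors(text_vector, 4)
paragraphs = split_into_paragraph_vectors([[1.6, 1.23], [6.91, 9.4]])
```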

Further, the calculating a weight relationship of each group of the word vector set, each group of the paragraph vector set, and the position vector set includes: randomly selecting vectors in any one vector set of the word vector set, the paragraph vector set and the position vector set as target vectors, obtaining a masking vector by using a text masking operation on the target vectors, calculating the weights of the masking vector and the vectors in each vector set to obtain a weight set, and performing weighted fusion on the weight sets to obtain the weight relationship.

For example, take the word vector [0.3,17.03,2.81,1.04] of the compound ketoconazole hair lotion as the target vector. The text masking operation masks any one datum; if [0.3,17.03,2.81,1.04] is masked into [0.3,[MASK],2.81,1.04], the weight set is obtained by calculating the weights of [0.3,[MASK],2.81,1.04] against the other word vectors, paragraph vectors and position vectors.

In detail, a similarity calculation method may be used to calculate the weight between the masking vector and the vectors in each vector set, and the similarity calculation method may use a currently disclosed cosine similarity method, Euclidean distance method, or the like.
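A cosine-similarity version of the weight calculation can be sketched as follows; cosine similarity is one of the disclosed options named above, and the vector values used here are only illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (one disclosed similarity option)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def weight_set(mask_vector, vector_sets):
    """Weights of the masking vector against every vector in every vector set."""
    return [cosine_similarity(mask_vector, v) for vs in vector_sets for v in vs]

weights = weight_set([0.3, 17.03], [[[1.6, 1.23], [6.91, 9.4]]])
```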

The weighted fusion may adopt a fusion method in the form of a Gaussian distribution, a linear mode (such as a linear function), or a nonlinear mode (such as a quadratic function). For example, if the weight set is [0.101,3.091,2.057,0.4,0.756,2.71,5.103], fusing with a linear function yields the k value and b value of the linear function, and the k value and b value are then used as internal parameters of the original text error correction model.
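The linear-mode fusion can be read as fitting a line w_i ≈ k·i + b over the weight set. The patent names only "a linear function", so the ordinary-least-squares fit used here is an assumed, illustrative choice:

```python
def linear_fusion(weights):
    """Fit w_i ~ k*i + b over the weight set by ordinary least squares.
    Least squares is an assumption; requires at least two weights."""
    n = len(weights)
    mean_x = sum(range(n)) / n
    mean_y = sum(weights) / n
    num = sum((i - mean_x) * (w - mean_y) for i, w in enumerate(weights))
    den = sum((i - mean_x) ** 2 for i in range(n))
    k = num / den
    b = mean_y - k * mean_x
    return k, b

# The k and b values would then serve as internal parameters of the
# original text error correction model, per the description above.
k, b = linear_fusion([0.101, 3.091, 2.057, 0.4, 0.756, 2.71, 5.103])
```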

And secondly, performing supervision training on the primary text error correction model by using the marked text set to obtain a standard text error correction model.

The labeled text set corresponds to the unlabeled text set and is a text set to which labels have been added. As described in step one, "compound ketoconazole hair lotion" and "compound tungconazole hair lotion" may both appear as unlabeled texts even though the latter is a wrong written form; the labeled text set attaches a "correctly written" label to "compound ketoconazole hair lotion", and wrongly written texts are generally not used.

The supervised training is the same as the unsupervised training in basic form, and in detail, the obtaining of the standard text error correction model by carrying out the supervised training on the primary text error correction model by using the labeled text set includes:

extracting labels of the marked texts from the marked text set to obtain a real label set;

converting the marked text set into a marked text vector set according to the text vector conversion method;

inputting the marked text vector set into the primary text error correction model for supervision training to obtain a prediction label set;

and judging whether the error range of the predicted tag set and the real tag set is larger than a preset error, if so, continuing the supervised training until the error range of the predicted tag set and the real tag set is smaller than the preset error, and quitting the supervised training to obtain a standard text error correction model.

And step three, acquiring a text to be corrected, performing text masking operation on the text to be corrected to obtain one or more groups of masked texts, and inputting the masked texts into the standard text correction model to obtain a predicted text and a prediction probability value of the predicted text.

If the prescription printer mistakenly prints "compound ketoconazole hair lotion" as "compound tungconazole hair lotion", then "compound tungconazole hair lotion" is the text to be corrected; performing the text masking operation on it yields masked texts in which individual characters are masked, such as "compound [MASK]conazole hair lotion".
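The text masking operation on the text to be corrected, producing one masked variant per character position, might be sketched as follows; the `[MASK]` token name mirrors BERT's convention and is an assumption, since the patent does not name the token.

```python
MASK = "[MASK]"  # token name borrowed from BERT's convention (an assumption)

def masked_variants(text):
    """Produce one masked copy of the text per character position."""
    return [text[:i] + MASK + text[i + 1:] for i in range(len(text))]

variants = masked_variants("tungconazole")
```

Each variant would then be fed to the standard text error correction model to obtain a predicted text and its prediction probability value.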

In detail, inputting the masked text into the standard text error correction model to obtain a predicted text and a predicted probability value of the predicted text, including: and converting the shielded text into a shielded vector according to the text vector conversion method, and inputting the shielded vector into the standard text error correction model to obtain a predicted text and a prediction probability value of the predicted text.

And step four, judging whether the predicted text is the same as the text to be corrected.

As described above, when the masked text "compound [MASK]conazole hair lotion" is predicted, the predicted text "compound ketoconazole hair lotion" is obtained, and it is determined whether the predicted text "compound ketoconazole hair lotion" is the same as the text to be corrected, "compound tungconazole hair lotion".

And fifthly, if the predicted text is the same as the text to be corrected, text correction does not need to be performed on the text to be corrected, and a new text to be corrected is received.

If the predicted text is the same as the text to be corrected, for example both are "compound tungconazole hair lotion", this indicates that no error by the prescription printer was found.

And step six, if the predicted text is not the same as the text to be corrected, judging whether the predicted probability value is larger than a preset probability value, and if the predicted probability value is smaller than the preset probability value, not needing to correct the text of the text to be corrected and re-receiving the text to be corrected.

For example, suppose the predicted text "compound ketoconazole hair lotion" is different from the text to be corrected, "compound tungconazole hair lotion", and the prediction probability value of the predicted text "compound ketoconazole hair lotion" is 97%.

If the preset probability value is 99%, the 97% prediction probability does not meet the accuracy requirement, and therefore text error correction is not performed on the text to be corrected.

And seventhly, performing text error correction on the text to be corrected according to the predicted text if the predicted probability value is greater than the preset probability value.

And if the preset probability value is 96%, the 97% prediction probability exceeds it, so the text to be corrected is replaced with the predicted text "compound ketoconazole hair lotion", completing the text correction.
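Steps four through seven reduce to a single decision rule: replace the original text only when the model disagrees with it and its confidence exceeds the preset probability value. A minimal sketch:

```python
def correct_text(original, predicted, prob, threshold):
    """Replace the original only when the model disagrees with it AND its
    prediction probability exceeds the preset probability value."""
    if predicted != original and prob > threshold:
        return predicted
    return original

wrong = "compound tungconazole hair lotion"
right = "compound ketoconazole hair lotion"
```

With the document's numbers: a 97%-confidence prediction clears a 96% threshold and the replacement happens, but fails a 99% threshold and the original is kept for re-submission.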

Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a non-volatile computer-readable storage medium. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).

In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.

The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.

It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.

The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.

Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms second, etc. are used to denote names, but not any particular order.

Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
