Data synchronization method, device, server and storage medium

Document No.: 1815873    Publication date: 2021-11-09

Reading note: This technology, "Data synchronization method, device, server and storage medium", was designed and created by 尹嘉峻 on 2021-08-17. Abstract: The application belongs to the technical field of big data and provides a data synchronization method, apparatus, server, and storage medium. The method includes: in response to detecting a data definition language event, executing the operation indicated by the data definition language event on the master database, generating a first log for recording the operation of the data definition language event, and caching the first log in a pre-established first cache; if a second log for recording the operation of a data manipulation language event is cached in a pre-established second cache, writing the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier; and, according to the separation identifier, distributing the first log and the second log in the log file to corresponding worker threads in the slave database for processing, so that the data definition language event and the data manipulation language event are executed in parallel on the slave database. In addition, the application also relates to blockchain technology.

1. A method for synchronizing data, the method comprising:

in response to detecting a data definition language event, executing an operation indicated by the data definition language event on a master database, generating a first log for recording the operation of the data definition language event, and caching the first log in a pre-established first cache;

if a second log for recording an operation of a data manipulation language event is cached in a pre-established second cache, writing the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier;

and distributing, according to the separation identifier, the first log and the second log in the log file to corresponding worker threads in a slave database for processing, so as to execute the data definition language event and the data manipulation language event in parallel on the slave database.

2. The data synchronization method of claim 1, further comprising:

requesting an operating system to allocate two memory spaces, and establishing the two allocated memory spaces as the first cache and the second cache, wherein the first cache is used for storing a first log for recording operations of the data definition language events, and the second cache is used for storing a second log for recording operations of the data manipulation language events.

3. The data synchronization method of claim 1, further comprising:

in response to detecting the data manipulation language event, performing an operation indicated by the data manipulation language event on the master database, generating the second log for recording the operation of the data manipulation language event, and caching the second log in the pre-established second cache.

4. The data synchronization method according to claim 1, wherein the distributing, according to the separation identifier, the first log and the second log in the log file to corresponding worker threads in the slave database for processing comprises:

and transmitting the log file to a cooperative thread of the slave database, so that, when the cooperative thread detects that the log file comprises the separation identifier, the cooperative thread allocates the first log and the second log on the two sides of the separation identifier to corresponding worker threads in an idle state for processing.

5. The data synchronization method according to claim 4, wherein, if there are a plurality of second logs in the log file, the allocating the first log and the second logs on the two sides of the separation identifier to corresponding worker threads in an idle state for processing comprises:

and allocating the first log on one side of the separation identifier to a first worker thread in an idle state for processing, and allocating the plurality of second logs on the other side of the separation identifier to a plurality of second worker threads in an idle state for processing respectively.

6. The data synchronization method of claim 1, further comprising:

and, for each worker thread, if the worker thread finishes processing a target log, executing a preset resource release function to release the processing resources allocated to the worker thread that has finished processing the target log, and determining the state of that worker thread as an idle state, wherein the target log is the first log and/or the second log.

7. The data synchronization method according to any one of claims 1-6, wherein the method further comprises:

if the data definition language event is not detected and the second cache meets a preset transfer condition, writing the second log in the second cache into the log file;

wherein the preset transfer condition comprises at least one of the following items: the current usage rate of the second cache is greater than a preset usage-rate threshold; a preset transfer period is reached.

8. A data synchronization apparatus, the apparatus comprising:

a first execution unit, configured to execute, on a master database, an operation indicated by a data definition language event in response to detecting the data definition language event, and generate a first log for recording the operation of the data definition language event, and cache the first log into a pre-established first cache;

a data writing unit, configured to, if a second log for recording an operation of a data manipulation language event is cached in a pre-established second cache, write the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier;

and a second execution unit, configured to distribute, according to the separation identifier, the first log and the second log in the log file to corresponding worker threads in a slave database for processing, so as to execute the data definition language event and the data manipulation language event in parallel on the slave database.

9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.

Technical Field

The present application relates to the field of big data technologies, and in particular, to a data synchronization method, apparatus, server, and storage medium.

Background

MySQL is a relational database management system. MySQL generally provides a master-slave replication function: Data Definition Language (DDL) events and Data Manipulation Language (DML) events on a master database are transmitted to a slave database through a binary log, and the operations recorded in the binary log are then re-executed (redone) on the slave database, so that data on the master database is synchronized to the slave database.

In the related art, after a DDL event is committed on the master database, a noticeable rise in replication delay can be observed on the slave database, and the delay does not fall back to its original stable level until a considerable period of time has passed. Therefore, the related art needs to reduce the delay incurred when processing DDL events, so as to improve the efficiency of synchronizing data from the master database to the slave database.

Disclosure of Invention

In view of this, embodiments of the present application provide a data synchronization method, apparatus, server, and storage medium, so as to solve the problem in the related art that synchronizing data from a master database to a slave database is inefficient because of the long delay incurred when processing a DDL event.

A first aspect of an embodiment of the present application provides a data synchronization method, including:

in response to detecting the data definition language event, executing the operation indicated by the data definition language event on the master database, generating a first log for recording the operation of the data definition language event, and caching the first log in a pre-established first cache;

if a second log for recording the operation of a data manipulation language event is cached in a pre-established second cache, writing the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier;

and distributing, according to the separation identifier, the first log and the second log in the log file to corresponding worker threads in the slave database for processing, so as to execute the data definition language event and the data manipulation language event in parallel on the slave database.

Further, the method further comprises:

requesting an operating system to allocate two memory spaces, and establishing the two allocated memory spaces as a first cache and a second cache, wherein the first cache is used for storing a first log for recording the operation of the data definition language event, and the second cache is used for storing a second log for recording the operation of the data manipulation language event.

Further, the method further comprises:

in response to detecting the data manipulation language event, performing the operation indicated by the data manipulation language event on the master database, generating a second log for recording the operation of the data manipulation language event, and caching the second log in the pre-established second cache.

Further, distributing, according to the separation identifier, the first log and the second log in the log file to corresponding worker threads in the slave database for processing includes:

and transmitting the log file to a cooperative thread of the slave database, so that, when the cooperative thread detects that the log file includes the separation identifier, it distributes the first log and the second log on the two sides of the separation identifier to corresponding worker threads in an idle state for processing.

Further, if there are a plurality of second logs in the log file, allocating the first log and the second logs on the two sides of the separation identifier to corresponding worker threads in an idle state for processing includes:

and allocating the first log on one side of the separation identifier to a first worker thread in an idle state for processing, and allocating the plurality of second logs on the other side of the separation identifier to a plurality of second worker threads in an idle state for processing respectively.

Further, the method further comprises:

and, for each worker thread, if the worker thread finishes processing a target log, executing a preset resource release function to release the processing resources allocated to the worker thread that has finished processing the target log, and determining the state of that worker thread as an idle state, wherein the target log is the first log and/or the second log.

Further, the method further comprises:

if the data definition language event is not detected and the second cache meets a preset transfer condition, writing the second log in the second cache into the log file;

wherein the preset transfer condition includes at least one of the following items: the current usage rate of the second cache is greater than a preset usage-rate threshold; a preset transfer period is reached.

A second aspect of an embodiment of the present application provides a data synchronization apparatus, including:

a first execution unit configured to execute an operation indicated by a DDL event on a master database in response to detection of the DDL event, and generate a first log for recording the operation of the DDL event, the first log being cached in a first cache established in advance;

a data writing unit, configured to, if a second log for recording an operation of a data manipulation language event is cached in a pre-established second cache, write the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier;

and a second execution unit, configured to distribute, according to the separation identifier, the first log and the second log in the log file to corresponding worker threads in the slave database for processing, so as to execute the DDL event and the DML event in parallel on the slave database.

A third aspect of embodiments of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the data synchronization method provided in the first aspect when executing the computer program.

A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the data synchronization method provided in the first aspect.

The data synchronization method, apparatus, server, and storage medium provided by the embodiments of the present application have the following beneficial effects: the logs of DDL events and the logs of DML events are cached in different storage spaces, and when they are written into the log file they are distinguished by a separation identifier, so that the logs of the two different kinds of events can be told apart accurately. This allows the DDL events and the DML events to be executed in parallel on the slave database, which improves the efficiency of data synchronization between the master database and the slave database.

Drawings

In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.

Fig. 1 is a flowchart of an implementation of a data synchronization method provided in an embodiment of the present application;

FIG. 2 is a flow chart of another implementation of a data synchronization method provided in an embodiment of the present application;

fig. 3 is a block diagram of a data synchronization apparatus according to an embodiment of the present application;

fig. 4 is a block diagram of a server according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

The data synchronization method according to the embodiment of the present application may be executed by a control server (hereinafter, referred to as a "server"). When the data synchronization method is executed by the server, the execution subject is the server.

Referring to fig. 1, fig. 1 shows a flowchart of an implementation of a data synchronization method provided in an embodiment of the present application, including:

Step 101, in response to detecting the DDL event, executing the operation indicated by the DDL event on the master database, generating a first log for recording the operation of the DDL event, and caching the first log in a pre-established first cache.

The DDL event may include one operation or may include a plurality of operations, and the operation included in the DDL event is a DDL operation. The first log described above is typically a log of operations for recording DDL events. The first cache is typically a pre-established memory space.

Here, the user may submit the DDL event to the server through the user terminal. In this way, the server may detect the DDL event and then perform the DDL operation included in the DDL event on a master database managed by the server. When a server performs a DDL operation, a first log for recording the performed operation may be generated.

Step 102, if a second log for recording the operation of a data manipulation language event is cached in a pre-established second cache, writing the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier.

Wherein the second cache is typically used to store a second log of operations for logging DML events. The second cache is typically a pre-established memory space. It should be noted that the first cache and the second cache are two different memory spaces. The second log is typically a log of operations for recording DML events. The operations included in a DML event are typically DML operations. The DML event may include one operation or may include a plurality of operations.

The separation identifier is usually a preset identifier. As an example, the separation identifier may be "separator_log_event", or it may be another identifier.

Here, the execution body may write the first log and the second log into the log file at the same time, using the separation identifier as the boundary between them.

In practice, a first log usually contains a large amount of log data, and so does a second log. In order to keep the logs of the same event contiguous, the execution body may write all log data of the first log under the same DDL event into the log file in one pass, and likewise write all log data of a second log under the same DML event into the log file in one pass.
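The one-pass write described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the separator string follows the "separator_log_event" example given in the text, while the function and variable names are assumptions.

```python
# Illustrative sketch: flush a buffered DDL log and buffered DML logs into one
# log file, separated by a preset marker, keeping each event's lines contiguous.
SEPARATOR = "separator_log_event"  # preset separation identifier (example from the text)

def flush_caches(first_cache, second_cache):
    """Write all DDL log data, the separator, then all DML log data, in one pass."""
    lines = []
    lines.extend(first_cache)   # every line of the first (DDL) log, written together
    lines.append(SEPARATOR)     # boundary between the two kinds of logs
    lines.extend(second_cache)  # every line of the second (DML) logs, written together
    first_cache.clear()
    second_cache.clear()
    return lines

log_file = flush_caches(
    ["ddl: ALTER TABLE t ADD COLUMN c INT"],
    ["dml: INSERT INTO t VALUES (1)", "dml: UPDATE t SET c = 2"],
)
# log_file now holds the DDL log, then the separator, then the DML logs.
```

Because each cache is drained in a single pass, log data belonging to one event can never be interleaved with log data of another event in the resulting file.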

Step 103, distributing the first log and the second log in the log file to corresponding worker threads in the slave database for processing according to the separation identifier, so that the DDL event and the DML event are executed in parallel on the slave database.

Here, the execution subject may distribute the first log and the second log in the log file to different worker threads for processing. In this way, the first log and the second log are processed by multiple threads at the same time, so they can be handled in parallel, and the DDL event and the DML event can be executed in parallel on the slave database.

In the data synchronization method provided by this embodiment, the logs of DDL events and the logs of DML events are cached in different storage spaces, and when they are written into the log file they are distinguished by the separation identifier, so that the logs of the two different kinds of events can be told apart accurately. As a result, DDL events and DML events can be executed in parallel on the slave database, and the efficiency of data synchronization between the master database and the slave database is improved.

In an optional implementation manner of each embodiment of the present application, the data synchronization method may further include: in response to detecting the DML event, performing the operation indicated by the DML event on the master database, generating a second log for recording the operation of the DML event, and caching the second log into a pre-established second cache.

Here, the user may submit the DML event to the server through the user terminal. Thus, the server may detect the DML event and then perform the DML operation included in the DML event on the master database managed by the server. While the server performs the DML operation, a second log for recording the performed operation may be generated. And caching the generated second log into a pre-established second cache.

Note that there is one second log for each DML event. The second cache may store a single second log for one DML event, or it may store a plurality of second logs for a plurality of DML events at the same time. Accordingly, the resulting log file may contain one second log or a plurality of second logs. When a plurality of second logs exist in the log file, they can be distributed to a plurality of different worker threads for processing, so that a plurality of DML events are executed in parallel on the slave database.

For example, if 3 second logs for 3 DML events are stored in the second cache, there are 3 second logs in the obtained log file. At this time, the execution agent may distribute the 3 second logs to 3 different work threads for processing. In this way, 3 DML events can be executed in parallel on the slave database.

In an optional implementation manner of each embodiment of the present application, if a DDL event is not detected and the second cache meets a preset transfer condition, a second log in the second cache is written into a log file.

The preset transfer condition is usually a preset condition. In practice, the preset transfer condition may include, but is not limited to, at least one of the following: the current usage rate of the second cache is greater than a preset usage-rate threshold; a preset transfer period is reached. In practice, the preset transfer condition may further include, for example, that the number of second logs in the second cache is greater than a preset value.

The preset usage-rate threshold is typically a predetermined value, for example 0.8. In practice, this threshold is typically greater than 0 and less than 1. The preset transfer period is usually a predetermined period, for example 1 second.
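As a rough sketch, the transfer condition could be checked as below. The 0.8 threshold and 1-second period come from the examples above, but the function and parameter names are purely illustrative assumptions.

```python
# Illustrative check of the preset transfer condition: flush the second cache
# when its usage rate exceeds the threshold OR the transfer period has elapsed.
USAGE_THRESHOLD = 0.8  # preset usage-rate threshold (example value from the text)
FLUSH_PERIOD = 1.0     # preset transfer period in seconds (example value)

def should_flush(used_bytes, capacity_bytes, last_flush_time, now):
    usage = used_bytes / capacity_bytes
    return usage > USAGE_THRESHOLD or (now - last_flush_time) >= FLUSH_PERIOD

# Usage over the threshold triggers a flush even if the period has not elapsed:
high_usage = should_flush(900, 1000, last_flush_time=0.0, now=0.1)
# An elapsed period triggers a flush even at low usage:
period_hit = should_flush(100, 1000, last_flush_time=0.0, now=1.5)
# Neither condition holds, so no flush:
no_flush = should_flush(100, 1000, last_flush_time=0.0, now=0.1)
```

The "at least one of" wording in the text maps directly onto the `or` between the two conditions.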

Here, when no DDL event is detected, if the second cache meets the preset transfer condition, the second log in the second cache is written directly into the log file, which facilitates management of the entire database.

In some optional implementation manners of this embodiment, distributing the first log and the second log in the log file to corresponding work threads in the slave database for processing according to the separation identifier includes:

and transmitting the log file to a cooperative thread of the slave database, so that, when the cooperative thread detects that the log file includes the separation identifier, it distributes the first log and the second log on the two sides of the separation identifier to corresponding worker threads in an idle state for processing.

A cooperative thread is typically a thread responsible for assigning tasks to worker threads. The slave database may have one cooperative thread for assigning tasks and a plurality of worker threads for executing them.

Here, the execution subject may distribute the first log and the second log in the log file to different idle worker threads for processing via the cooperative thread of the slave database.

In practice, the execution subject may pass the log file to the cooperative thread of the slave database. The cooperative thread can then distribute each log in the log file to the worker threads for processing. Specifically, the cooperative thread first looks for a separation identifier in the log file; if one is detected, it uses the identifier as the boundary, assigns the first log on one side of the identifier to a worker thread in an idle state, and assigns the second logs on the other side of the identifier to other worker threads in an idle state.

It should be noted that, because the specific log-distribution task is executed by the cooperative thread of the slave database, the data processing resources of the server are saved.
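The dispatch step can be sketched as below. This is a simplified simulation under the assumption that the log file has the layout produced earlier (DDL log, separator, DML logs); the thread pool stands in for the idle worker threads and is not how a real MySQL applier is structured.

```python
# Illustrative cooperative-thread dispatch: split the log file at the
# separation identifier and hand each side to idle worker threads.
from concurrent.futures import ThreadPoolExecutor

SEPARATOR = "separator_log_event"  # preset separation identifier

def dispatch(log_lines, pool):
    """Split the log file at the separator and submit each side for processing."""
    idx = log_lines.index(SEPARATOR)
    ddl_side, dml_side = log_lines[:idx], log_lines[idx + 1:]
    futures = [pool.submit(lambda logs: ("ddl", logs), ddl_side)]  # one worker for the DDL log
    for entry in dml_side:                                         # one worker per DML log
        futures.append(pool.submit(lambda e: ("dml", e), entry))
    return [f.result() for f in futures]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dispatch(
        ["ddl: ALTER TABLE t ADD COLUMN c INT",
         SEPARATOR,
         "dml: INSERT INTO t VALUES (1)",
         "dml: UPDATE t SET c = 2"],
        pool,
    )
```

Because each DML log is submitted as its own task, the DDL work and every DML log can proceed concurrently once the separator has been located.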

In some optional implementations, the distributing the first log and the second log in the log file to corresponding work threads in the slave database for processing according to the separation identifier may also include:

first, a first log on the partition marker side is allocated to a first worker thread in an idle state, and analyzed to obtain and execute a statement of an operation of a DDL event recorded in the first log.

Then, the second log on the other side of the separation identifier is allocated to a second worker thread in an idle state for analysis, which obtains and executes the statements of the operations of the DML event recorded in the second log.

The first worker thread and the second worker thread are two different worker threads.

Here, the execution subject may distribute the first log on one side of the separation identifier to the first worker thread, which analyzes the first log to obtain the statements of the operations of the DDL event recorded in it and then executes those statements, thereby executing the DDL event corresponding to the first log. Meanwhile, the execution subject distributes the second log on the other side of the separation identifier to the second worker thread, which analyzes it to obtain the statements of the operations of the DML event recorded in it and then executes those statements, thereby executing the DML event corresponding to the second log.
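A toy rendering of a worker thread's job — parsing its assigned log entries back into statements and applying them — is shown below. `apply_statement` merely records each statement here; actually executing SQL against the slave database is out of scope, and all names, as well as the "kind: <SQL>" entry format, are assumptions carried over from the earlier sketches.

```python
# Illustrative worker-side processing: recover the statements recorded in a
# log and "execute" them one by one.
def parse_statements(log_entries):
    # Each toy entry looks like "ddl: <SQL>" or "dml: <SQL>"; keep only the SQL.
    return [entry.split(": ", 1)[1] for entry in log_entries]

applied = []

def apply_statement(sql):
    applied.append(sql)  # a real worker would run this statement on the slave

for stmt in parse_statements(["dml: INSERT INTO t VALUES (1)",
                              "dml: UPDATE t SET c = 2"]):
    apply_statement(stmt)
```

Each worker runs this loop independently over its own assigned log, which is what lets the DDL statements and the DML statements be applied concurrently.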

Here, the execution subject distributes the first log and the second log in the log file to different worker threads for processing. Since the first log and the second log are processed by multiple threads at the same time, they can be handled in parallel, and the DDL event and the DML event can be executed in parallel on the slave database.

In the foregoing implementation, if there are a plurality of second logs in the log file, allocating the first log and the second logs on the two sides of the separation identifier to corresponding worker threads in an idle state for processing includes:

and allocating the first log on one side of the separation identifier to a first worker thread in an idle state for processing, and allocating the plurality of second logs on the other side of the separation identifier to a plurality of second worker threads in an idle state for processing respectively.

Here, when a plurality of second logs exist in the log file, they can be distributed to a plurality of different second worker threads for analysis. In this way, a plurality of DML events can be executed in parallel on the slave database, which helps to further improve the efficiency of data synchronization between the master database and the slave database.

In some optional implementations of various embodiments of the present application, the data synchronization method may further include:

for each worker thread, if the worker thread finishes processing a target log, a preset resource release function is executed to release the processing resources allocated to that worker thread, and the state of that worker thread is determined to be an idle state.

The target log is the first log and/or the second log.

The preset resource release function is generally a preset function for releasing processing resources.

Here, the processing resources of a worker thread are released when it has finished processing all the logs of an entire DDL event; that is, resources are released once per event. Compared with releasing resources once for every operation within an event, the time spent on and the number of resource releases are significantly smaller, which improves data processing efficiency.

In addition, after the preset resource release function has been executed to release the processing resources allocated to a worker thread, the state of the worker thread is determined to be idle, which makes it convenient to assign tasks to that worker thread later and improves its availability.
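The per-event release pattern can be sketched as follows; the class and method names are illustrative assumptions, not the patent's actual interfaces.

```python
# Illustrative sketch: the resource-release function runs once after the
# worker finishes the whole target log (one event), not once per operation.
class Worker:
    def __init__(self):
        self.state = "busy"
        self.resources = ["row buffer", "metadata lock"]
        self.release_calls = 0

    def release_resources(self):
        """Preset resource-release function (runs once per finished event)."""
        self.resources.clear()
        self.release_calls += 1

    def process_target_log(self, operations):
        for _ in operations:
            pass                    # apply each operation of the event
        self.release_resources()    # a single release for the whole event
        self.state = "idle"         # worker can now be assigned new tasks

w = Worker()
w.process_target_log(["op1", "op2", "op3"])  # three operations, one release
```

Releasing once per event rather than once per operation is exactly the saving the paragraph above describes: the number of release calls stays at one regardless of how many operations the event contains.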

With further reference to fig. 2, fig. 2 is a flowchart of another implementation of a data synchronization method provided in an embodiment of the present application. As shown in fig. 2, the data synchronization method may include the following steps:

step 201, requesting to allocate two memory spaces to the operating system, and establishing the allocated two memory spaces as a first cache and a second cache.

The first cache is used for storing a first log of operations for recording DDL events, and the second cache is used for storing a second log of operations for recording DML events.

In practice, the two memory spaces are requested when a user terminal establishes a communication connection with the server, so as to obtain the first cache and the second cache.

Here, the execution subject may request the operating system to allocate two memory spaces, and then determine one of the memory spaces as a first cache and the other memory space as a second cache. The first cache is established for storing a first log of operations to record DDL events, and the second cache is established for storing a second log of operations to record DML events.
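A minimal sketch of step 201 is given below, using plain `bytearray`s to stand in for the memory spaces returned by the operating system; the cache size is an arbitrary example, not a value from the text.

```python
# Illustrative setup of the two caches: two distinct memory regions, one for
# first logs (DDL operations) and one for second logs (DML operations).
CACHE_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB per cache (assumed value)

first_cache = bytearray(CACHE_SIZE)   # will hold first logs (DDL operations)
second_cache = bytearray(CACHE_SIZE)  # will hold second logs (DML operations)

# The first cache and the second cache must be two different memory spaces.
assert first_cache is not second_cache
```

Keeping the two regions separate is what later allows the flush step to drain each kind of log in one contiguous pass.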

Step 202, in response to detecting the DDL event, executing the operation indicated by the DDL event on the master database, generating a first log for recording the operation of the DDL event, and caching the first log in the pre-established first cache.

Step 203, if a second log for recording the operation of a data manipulation language event is cached in the pre-established second cache, writing the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier.

Step 204, distributing the first log and the second log in the log file to corresponding worker threads in the slave database for processing according to the separation identifier, so as to execute the DDL event and the DML event in parallel on the slave database.

In the present embodiment, the specific operations of steps 202-204 are substantially the same as the operations of steps 101-103 in the embodiment shown in fig. 1, and are not repeated herein.

It should be noted that allocating different caches for the two different kinds of events makes it possible to cache the logs of DDL events and the logs of DML events in different storage spaces.

In all embodiments of the present application, upon detecting a DDL event the server may execute the operation indicated by the DDL event on the master database, generate a first log for recording the operation of the DDL event, and cache the first log in a pre-established first cache. It then writes the first log in the first cache and the second log in a second cache into a log file, distinguished by a preset separation identifier, where the second cache stores second logs recording the operations of DML events. Finally, according to the separation identifier, it distributes the first log and the second log in the log file to corresponding worker threads in the slave database for processing, so that the DDL event and the DML event are executed in parallel on the slave database. The server can upload the log file, the first log in the log file together with the identifier of its corresponding worker thread, the second log in the log file together with the identifier of its corresponding worker thread, and the separation identifier to a blockchain, which ensures security as well as fairness and transparency to the user. The user equipment may download this data information from the blockchain to verify whether it has been tampered with. The blockchain referred to in this example is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.

Referring to fig. 3, fig. 3 is a block diagram of a data synchronization apparatus 300 according to an embodiment of the present disclosure. In this embodiment, each unit included in the data synchronization apparatus is configured to execute each step in the corresponding embodiments shown in fig. 1 to fig. 2. For details, please refer to the related descriptions in the embodiments corresponding to fig. 1 to fig. 2. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 3, the data synchronization apparatus 300 includes:

a first execution unit 301, configured to execute an operation indicated by a DDL event on a master database in response to detecting the DDL event, and generate a first log for recording the operation of the DDL event, and cache the first log into a pre-established first cache;

a data writing unit 302, configured to, if a second log for recording an operation of a data manipulation language event is cached in a pre-established second cache, write the first log in the first cache and the second log in the second cache into a log file, distinguished by a preset separation identifier;

and a second execution unit 303, configured to distribute, according to the separation identifier, the first log and the second log in the log file to corresponding working threads in the slave database for processing, so as to execute the DDL event and the DML event in parallel on the slave database.

As an embodiment of the present application, the apparatus may further include a cache establishing unit (not shown in the figure). Wherein the cache establishing unit is configured to: requesting an operating system to allocate two memory spaces, and establishing the two allocated memory spaces as a first cache and a second cache, wherein the first cache is used for storing a first log for recording operations of DDL events, and the second cache is used for storing a second log for recording operations of DML events.

As an embodiment of the present application, the apparatus may further include a data caching unit (not shown in the figure). Wherein, the data buffer unit is used for: in response to detecting the data manipulation language event, performing an operation indicated by the data manipulation language event on the master database, and generating a second log for recording the operation of the data manipulation language event, and caching the second log into a pre-established second cache.

As an embodiment of the present application, the second execution unit 303 is specifically configured to: transmit the log file to a cooperative thread of the slave database, so that, when the cooperative thread detects that the log file includes the separation identifier, the first log and the second log on the two sides of the separation identifier are respectively distributed to corresponding working threads in an idle state for processing.

As an embodiment of the present application, if there are a plurality of second logs in the log file, the second execution unit 303 is further specifically configured to: allocate the first log on one side of the separation identifier to a first working thread in an idle state for processing, and respectively allocate the plurality of second logs on the other side of the separation identifier to a plurality of second working threads in an idle state for processing.
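The cooperative-thread logic in these two embodiments can be sketched as follows (again a hypothetical illustration, not the application's implementation; a `ThreadPoolExecutor` stands in for the pool of idle working threads, and the separator byte sequence is assumed). The record is split at the separation identifier, the first log goes to one worker, and each of the second logs goes to its own worker:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical separation identifier, matching whatever the master wrote.
SEP = b"\x1f--DDL/DML--\x1f"

def apply_log(log: bytes) -> str:
    # Placeholder: replay one log entry on the slave database.
    return log.decode()

def dispatch(record: bytes, pool: ThreadPoolExecutor):
    """Cooperative-thread sketch: split the log-file record at the
    separation identifier and hand each side to idle worker threads."""
    if SEP in record:
        first_log, second_part = record.split(SEP, 1)
        # The first (DDL) log goes to one idle worker...
        futures = [pool.submit(apply_log, first_log)]
        # ...and each second (DML) log to its own idle worker, in parallel.
        futures += [pool.submit(apply_log, l) for l in second_part.split(b"\n")]
    else:
        futures = [pool.submit(apply_log, l) for l in record.split(b"\n")]
    return [f.result() for f in futures]
```

Collecting `f.result()` at the end is only for demonstration; a production coordinator would track completion per worker rather than block on all of them.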

As an embodiment of the present application, the apparatus may further include a resource releasing unit (not shown in the figure). The resource releasing unit may be configured to: for each working thread, if the working thread finishes processing a target log, execute a preset resource release function to release the processing resources allocated to that working thread, and determine the state of that working thread as the idle state, wherein the target log is the first log and/or the second log.
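A minimal sketch of this release-and-mark-idle behavior (hypothetical names; the resource release function is passed in as a callback, since the application does not specify its contents):

```python
import threading

class Worker:
    """Hypothetical worker-thread wrapper: after finishing its target log,
    it runs the preset resource-release function and returns to idle."""
    IDLE, BUSY = "idle", "busy"

    def __init__(self):
        self.state = Worker.IDLE
        self._lock = threading.Lock()

    def process(self, target_log: bytes, release_fn):
        with self._lock:
            self.state = Worker.BUSY
        try:
            pass  # placeholder: replay target_log on the slave database
        finally:
            release_fn()  # preset resource-release function
            with self._lock:
                self.state = Worker.IDLE  # eligible for the next dispatch
```

The `finally` block guarantees the resources are released and the thread returns to the idle pool even if replaying the log raises an error.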

As an embodiment of the present application, the apparatus may further include a transfer determination unit (not shown in the figure). The transfer determination unit may be configured to: if no data definition language event is detected and the second cache meets a preset transfer condition, write the second log in the second cache into the log file;

wherein the preset transfer condition comprises at least one of the following: the current usage rate of the second cache being greater than a preset usage rate threshold; and a preset transfer period being reached.
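The two-part transfer condition can be expressed as a small predicate (a sketch under assumed parameter names; the application does not fix the threshold value or the period length):

```python
import time

def should_transfer(cache_used: int, cache_size: int,
                    last_flush: float, period_s: float,
                    usage_threshold: float = 0.8) -> bool:
    """Hypothetical check of the preset transfer condition: flush the
    second cache when its usage rate exceeds the preset threshold, OR
    when the preset transfer period has elapsed since the last flush."""
    usage = cache_used / cache_size
    return usage > usage_threshold or (time.monotonic() - last_flush) >= period_s
```

Flushing on either condition bounds both memory pressure (usage) and replication lag (period) even when no DDL event forces a write.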

The apparatus provided by this embodiment caches the logs of DDL events and the logs of DML events in different storage spaces and writes them into the log file distinguished by the separation identifier, so that the logs of the two different kinds of events can be distinguished accurately. The DDL event and the DML event can thus be executed in parallel on the slave database, improving the efficiency of data synchronization between the master database and the slave database.

It should be understood that, in the structural block diagram of the data synchronization apparatus shown in fig. 3, each unit is configured to execute each step in the embodiments corresponding to fig. 1 to fig. 2. Each of these steps has been explained in detail in the above embodiments; please refer to the related descriptions in the embodiments corresponding to fig. 1 to fig. 2, which are not repeated here.

Fig. 4 is a block diagram of a server according to another embodiment of the present application. As shown in fig. 4, the server 400 of this embodiment includes: a processor 401, a memory 402, and a computer program 403, for example a program for a data synchronization method, stored in the memory 402 and executable on the processor 401. When executing the computer program 403, the processor 401 implements the steps in the embodiments of the data synchronization method described above, such as steps 101 to 103 shown in fig. 1, or steps 201 to 204 shown in fig. 2. Alternatively, when executing the computer program 403, the processor 401 implements the functions of the units in the embodiment corresponding to fig. 3, for example the functions of units 301 to 303 shown in fig. 3; for details, refer to the related description in the embodiment corresponding to fig. 3, which is not repeated here.

Illustratively, the computer program 403 may be divided into one or more units, which are stored in the memory 402 and executed by the processor 401 to implement the present application. The one or more units may be a series of computer program instruction segments capable of performing particular functions, and the instruction segments describe the execution of the computer program 403 in the server 400. For example, the computer program 403 may be divided into a first execution unit, a data writing unit, and a second execution unit, the specific functions of which are as described above.

The server may include, but is not limited to, the processor 401 and the memory 402. Those skilled in the art will appreciate that fig. 4 is merely an example of the server 400 and does not constitute a limitation on the server 400, which may include more or fewer components than shown, combine some components, or have different components; for example, the server 400 may also include input/output devices, network access devices, buses, etc.

The processor 401 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

The memory 402 may be an internal storage unit of the server 400, such as a hard disk or memory of the server 400. The memory 402 may also be an external storage device of the server 400, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the server 400. Further, the memory 402 may include both an internal storage unit of the server 400 and an external storage device. The memory 402 is used for storing the computer program and other programs and data required by the server 400. The memory 402 may also be used to temporarily store data that has been output or is to be output.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. The computer-readable storage medium may be non-volatile or volatile. Based on such understanding, all or part of the flow in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable storage medium does not include electrical carrier signals and telecommunication signals.

The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
