Method, device and system for processing delay message

Document No.: 1963592 · Publication date: 2021-12-14

Note: This technology, 延迟消息处理方法、装置与系统 (Method, device and system for processing delay message), was designed and created by 刘德慧 on 2020-10-29. Abstract: The disclosure provides a method, a device and a system for processing delayed messages. The delayed message processing method comprises the following steps: reading a plurality of delay messages with consumption time within a preset time period from a database, and storing the delay messages with consumption time earlier than or equal to the current time point into a first queue; returning a target delay message in the first queue in response to a delay message fetching instruction, and transferring and storing the target delay message from the first queue to a second queue; deleting the target delayed message in the second queue and the database in response to a delayed message consumption success message corresponding to the target delayed message. The embodiments of the disclosure can improve the storage capacity, processing efficiency and data reliability of delayed messages.

1. A method for delayed message processing, comprising:

reading a plurality of delay messages with consumption time within a preset time period from a database, and storing the delay messages with consumption time earlier than or equal to the current time point into a first queue;

returning a target delay message in the first queue in response to a delay message fetching instruction, and transferring and storing the target delay message from the first queue to a second queue;

deleting the target delayed message in the second queue and the database in response to a delayed message consumption success message corresponding to the target delayed message.

2. The delayed message processing method as claimed in claim 1, wherein said reading the plurality of delayed messages whose consumption time is within a preset time period from the database comprises:

storing the delay message with the consumption time later than the current time point into a third queue, wherein the third queue is a ring queue with time scales;

and periodically transferring the delay messages in the third queue whose consumption time is earlier than or equal to the current time point to the first queue.

3. The delayed message processing method as claimed in claim 1, wherein said reading the plurality of delayed messages whose consumption time is within a preset time period from the database comprises:

reading, from the database according to delay message identifiers, a plurality of delay messages corresponding to a target application number and a target service number, wherein each delay message identifier is generated according to the consumption time, application number and service number of the corresponding delay message and comprises a timestamp representing the consumption time at millisecond precision.

4. The delayed message processing method as claimed in claim 1, further comprising:

and transferring and storing the delay messages whose storage duration in the second queue exceeds a preset value to the first queue.

5. The delayed message processing method as claimed in claim 2, wherein said reading the plurality of delayed messages whose consumption time is within the preset time period from the database comprises:

reading only a plurality of delayed messages which have consumption time within the preset time period and are not currently stored in the first queue, the second queue and the third queue.

6. A delayed message processing apparatus, comprising:

a message extraction module configured to read a plurality of delay messages with consumption time within a preset time period from the database and to store the delay messages with consumption time earlier than or equal to the current time point into a first queue;

a message dump module configured to return a target delay message in the first queue in response to a delay message fetch instruction, and to transfer and store the target delay message from the first queue to a second queue;

a message deletion module configured to delete the target delayed message in the second queue and the database in response to a delayed message consumption success message corresponding to the target delayed message.

7. A delayed message processing system, comprising:

a plurality of service modules for generating delay messages, consuming delay messages, and sending delay message storage instructions, delay message fetch instructions and delay message consumption success messages;

a database for storing the delay message;

a gateway module, connected to the plurality of service modules, the database and the plurality of delayed message processing devices, for receiving the delay message storage instruction to store the delay message in the database, and for distributing the delay message fetch instruction and the delay message consumption success message to the plurality of delayed message processing devices;

a plurality of delayed message processing devices, each of said delayed message processing devices corresponding to one of said service modules, connected to said database and said gateway module, for performing the method according to any one of claims 1 to 5.

8. The delayed message processing system of claim 7 wherein said gateway module is configured to:

obtaining a delay message according to the delay message storage instruction;

generating a delay message identifier according to the application number, the service number and the consumption time of the delay message;

storing the delayed message in the database according to the delayed message identifier.

9. An electronic device, comprising:

a memory; and

a processor coupled to the memory, the processor configured to perform the delayed message processing method of any of claims 1-5 based on instructions stored in the memory.

10. A computer-readable storage medium on which a program is stored, the program implementing the delayed message processing method according to any one of claims 1 to 5 when executed by a processor.

Technical Field

The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, and a system for processing a delay message, which can improve storage capacity, data reliability, and processing efficiency of the delay message.

Background

A delay message is a message that a service system schedules to be executed only after a certain delay. In Java, delay messages are conventionally managed by temporarily storing them in JVM (Java Virtual Machine) memory using the DelayQueue (delay queue) class provided by the Java platform. However, because JVM memory capacity is limited, this approach cannot be applied to scenarios with a large volume of data (a large number of delay messages); moreover, the JVM memory is released when the machine restarts, so the buffered delay messages are lost.
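For reference, the related-art approach mentioned above can be sketched with Java's DelayQueue; the message class and its fields below are purely illustrative:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Related-art approach: delay messages are buffered entirely in JVM memory.
public class InMemoryDelayExample {

    // Illustrative message type; consumeAtMillis is the intended consumption time.
    static class DelayedMessage implements Delayed {
        final String body;
        final long consumeAtMillis;

        DelayedMessage(String body, long delayMillis) {
            this.body = body;
            this.consumeAtMillis = System.currentTimeMillis() + delayMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(consumeAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedMessage> queue = new DelayQueue<>();
        queue.put(new DelayedMessage("order-timeout-check", 2_000));

        // take() blocks until the head element's delay has elapsed.
        DelayedMessage ready = queue.take();
        System.out.println("consume: " + ready.body);
        // Capacity is bounded by the heap, and all buffered messages are lost on restart.
    }
}
```

Everything in this sketch lives only in JVM memory, which is exactly the capacity and durability limitation the present disclosure addresses.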

It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.

Disclosure of Invention

An object of the present disclosure is to provide a method, an apparatus, and a system for processing a delayed message, which overcome, at least to some extent, the problems of insufficient storage capacity, data reliability, and processing efficiency of the delayed message due to the limitations and disadvantages of the related art.

According to a first aspect of the embodiments of the present disclosure, there is provided a method for processing a delay message, including: reading a plurality of delay messages with consumption time within a preset time period from a database, and storing the delay messages with consumption time earlier than or equal to the current time point into a first queue; returning a target delay message in the first queue in response to a delay message fetching instruction, and transferring and storing the target delay message from the first queue to a second queue; deleting the target delayed message in the second queue and the database in response to a delayed message consumption success message corresponding to the target delayed message.

In an exemplary embodiment of the present disclosure, the reading the plurality of delay messages with the consumption time within the preset time period from the database includes: storing the delay messages with consumption time later than the current time point into a third queue, wherein the third queue is a ring queue with time scales; and periodically transferring the delay messages in the third queue whose consumption time is earlier than or equal to the current time point to the first queue.

In an exemplary embodiment of the present disclosure, the reading the plurality of delay messages with the consumption time within the preset time period from the database includes: and reading a plurality of delay messages corresponding to the target application number and the target service number from the database according to the delay message identifier.

In an exemplary embodiment of the present disclosure, the method further comprises: transferring and storing the delay messages whose storage duration in the second queue exceeds a preset value to the first queue.

In an exemplary embodiment of the present disclosure, the reading the plurality of delay messages with the consumption time within the preset time period from the database includes: reading only a plurality of delayed messages which have consumption time within the preset time period and are not currently stored in the first queue, the second queue and the third queue.

According to a second aspect of the embodiments of the present disclosure, there is provided a delayed message processing apparatus including: a message extraction module configured to read a plurality of delay messages with consumption time within a preset time period from the database and to store the delay messages with consumption time earlier than or equal to the current time point into a first queue; a message dump module configured to return a target delay message in the first queue in response to a delay message fetch instruction, and to transfer and store the target delay message from the first queue to a second queue; a message deletion module configured to delete the target delayed message in the second queue and the database in response to a delayed message consumption success message corresponding to the target delayed message.

In an exemplary embodiment of the disclosure, the gateway module of the delayed message processing system described below is configured to: obtain a delay message according to the delay message storage instruction; generate a delay message identifier according to the application number, the service number and the consumption time of the delay message; and store the delay message in the database according to the delay message identifier.

According to a third aspect of the present disclosure, there is provided a delayed message processing system comprising: a plurality of service modules for generating delay messages, consuming delay messages, and sending delay message storage instructions, delay message fetch instructions and delay message consumption success messages; a database for storing the delay messages; a gateway module, connected to the plurality of service modules, the database and the plurality of delayed message processing devices, for receiving the delay message storage instruction to store the delay message in the database, and for distributing the delay message fetch instruction and the delay message consumption success message to the plurality of delayed message processing devices; and a plurality of delayed message processing devices, each corresponding to one of the service modules and connected to the database and the gateway module, for performing the method as described in any one of the above.

According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a memory; and a processor coupled to the memory, the processor configured to perform the method of any of the above based on instructions stored in the memory.

According to a fifth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a program which, when executed by a processor, implements a method as in any one of the above.

According to the embodiments of the present disclosure, the delay messages are stored in the database, and only the delay messages within the preset time period are read from the database into the first queue. This avoids the limited storage capacity caused by keeping delay messages in an in-memory cache as in the related art, while maintaining the efficiency of extracting delay messages. By transferring each pulled delay message into the second queue and deleting it from the second queue and the database only after information that the delay message has been successfully consumed is obtained, a delay message is prevented from being deleted before it has actually been successfully consumed, which effectively improves the reliability of the stored delay message data.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.

Fig. 1 is a schematic diagram of a delayed message processing system in an exemplary embodiment of the present disclosure.

Fig. 2 is a flowchart of a method for delayed message processing in an exemplary embodiment of the present disclosure.

Fig. 3 is a schematic diagram of a delayed message processing system according to an embodiment of the present disclosure executing a delayed message processing method.

Fig. 4 is an interaction diagram of a delayed message processing system in an embodiment of the disclosure.

Fig. 5 is a block diagram of a delayed message processing apparatus in an exemplary embodiment of the present disclosure.

Fig. 6 is a block diagram of an electronic device in an exemplary embodiment of the disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.

Further, the drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

The following detailed description of exemplary embodiments of the disclosure refers to the accompanying drawings.

Fig. 1 is a schematic diagram of a delayed message processing system in an exemplary embodiment of the present disclosure.

Referring to fig. 1, a delayed message processing system 100 may include:

a plurality of service modules 11 for generating a delay message, consuming the delay message, and sending a delay message storage instruction, a delay message fetch instruction, and a delay message consumption success message;

a database 12 for storing delay messages;

a gateway module 13, connected to the plurality of service modules 11, the database 12 and the plurality of delayed message processing devices 14, for receiving a delayed message storage instruction to store the delayed message in the database 12, and for distributing a delayed message fetch instruction and a delayed message consumption success message to the plurality of delayed message processing devices 14;

and a plurality of delay message processing devices 14, wherein each delay message processing device 14 corresponds to one service module 11 and is connected with the database 12 and the gateway module 13 for executing the delay message processing method provided by the embodiment of the disclosure.

The plurality of service modules 11 may be different service modules in different applications, for example, the applications may be applications on a mobile device side (e.g., a mobile phone, a tablet computer) or applications on a non-mobile device side (e.g., a desktop computer, a notebook computer). In some embodiments, these service modules 11 are implemented by Java language, and in other embodiments, the service modules 11 may be implemented by other programming languages.

The specific type of the database 12 includes, but is not limited to, Redis, Sharkstore, MySQL and ES databases, and those skilled in the art can choose a database according to actual needs. The database 12 may be located on one or more hardware devices, and the system 100 may deploy multiple databases 12 as a distributed cluster.

The number of gateway modules 13 in the system 100 may be one or more. When the system 100 is deployed as a distributed cluster over multiple hardware devices, one gateway module 13 may be configured for each hardware device. In addition to receiving the delay message storage instruction to store the delay message in the database 12 and distributing the delay message fetch instruction and the delay message consumption success message to the plurality of delayed message processing devices 14, the gateway module 13 may restrict which service modules 11 can use the system 100 through a mechanism such as white-list authorization, or apply rate limiting (flow control) per service module 11.
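A minimal sketch of such gateway-side admission control, assuming one Guava RateLimiter per (application number, service number) pair; the class, field names and key format are illustrative and not part of the disclosure:

```java
import com.google.common.util.concurrent.RateLimiter;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative gateway-side admission check: white-list authorization plus per-service throttling.
public class GatewayAdmission {
    private final Set<String> whiteList;                       // authorized "appCode:serviceCode" keys
    private final Map<String, RateLimiter> limiters = new ConcurrentHashMap<>();
    private final double permitsPerSecond;

    public GatewayAdmission(Set<String> whiteList, double permitsPerSecond) {
        this.whiteList = whiteList;
        this.permitsPerSecond = permitsPerSecond;
    }

    /** Returns true if the calling service module may use the system right now. */
    public boolean admit(String appCode, String serviceCode) {
        String key = appCode + ":" + serviceCode;
        if (!whiteList.contains(key)) {
            return false;                                       // not authorized
        }
        RateLimiter limiter =
                limiters.computeIfAbsent(key, k -> RateLimiter.create(permitsPerSecond));
        return limiter.tryAcquire();                            // false when the flow limit is exceeded
    }
}
```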

The delayed message processing apparatus 14 may be configured to execute the delayed message processing method provided by the embodiment of the present disclosure.

Fig. 2 is a flowchart of a method for delayed message processing in an exemplary embodiment of the present disclosure.

Referring to fig. 2, a delayed message processing method 200 may include:

step S1, reading a plurality of delay messages with consumption time within a preset time period from a database, and storing the delay messages with consumption time earlier than or equal to the current time point into a first queue;

step S2, returning the target delay message in the first queue in response to the delay message fetching instruction, and transferring and storing the target delay message from the first queue to a second queue;

step S3, in response to the delayed message consumption success message corresponding to the target delayed message, deleting the target delayed message in the second queue and the database.

According to the embodiments of the present disclosure, the delay messages are stored in the database, and only the delay messages within the preset time period are read from the database into the first queue. This avoids the limited storage capacity caused by keeping delay messages in an in-memory cache as in the related art, while maintaining the efficiency of extracting delay messages. By storing each pulled delay message in the second queue and deleting it from the second queue and the database only after information that the delay message has been successfully consumed is obtained, a delay message is prevented from being deleted before it has been successfully consumed, which effectively improves the reliability of the stored delay message data.

The steps of the delayed message processing method 200 will be described in detail below.

In step S1, a plurality of delay messages with consumption time within a preset time period are read from the database, and the delay messages with consumption time earlier than or equal to the current time point are stored in the first queue.

In the embodiment of the present disclosure, all delay messages to be used are stored in the database, and only the delay messages to be used within a period of time near the current time point are read from the database into memory, so as to ensure a fast response to the delay message fetch instruction. The memory may be, for example, JVM memory. Since the delay messages are stored in a database, the storage capacity and data reliability of the delay messages can be greatly increased. In one embodiment, the preset time period may be, for example, the period from the current time point to a time point a duration x later, where x can be set by a person skilled in the art and may be, for example, 10 minutes.
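As a sketch only, the timed pull could be a range query on the consumption time; the table name, column names and row holder below are assumptions for illustration, not a schema defined by the disclosure:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

// Illustrative timed pull of delay messages due within the next "window" milliseconds.
public class DelayedMessagePuller {

    /** Minimal row holder; field names mirror the assumed columns. */
    public static class PulledMessage {
        public final long delayMessageId;
        public final String content;
        public final long consumeTimeMillis;

        PulledMessage(long delayMessageId, String content, long consumeTimeMillis) {
            this.delayMessageId = delayMessageId;
            this.content = content;
            this.consumeTimeMillis = consumeTimeMillis;
        }
    }

    // Table and column names are assumptions for illustration only.
    private static final String SQL =
            "SELECT delay_message_id, message_content, consume_time "
          + "FROM delay_message "
          + "WHERE app_code = ? AND service_code = ? AND consume_time BETWEEN ? AND ?";

    public List<PulledMessage> pull(Connection conn, String appCode, String serviceCode,
                                    long windowMillis) throws SQLException {
        long now = System.currentTimeMillis();
        List<PulledMessage> result = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, appCode);
            ps.setString(2, serviceCode);
            ps.setTimestamp(3, new Timestamp(now));
            ps.setTimestamp(4, new Timestamp(now + windowMillis));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(new PulledMessage(
                            rs.getLong("delay_message_id"),
                            rs.getString("message_content"),
                            rs.getTimestamp("consume_time").getTime()));
                }
            }
        }
        // The caller stores rows with consumeTimeMillis <= now into the first queue
        // and the remaining rows into the third queue (the time wheel).
        return result;
    }
}
```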

The first queue is located in memory. It stores the delay messages whose consumption time has been reached or is just about to be reached, and may therefore be referred to as a "ready queue". In some embodiments, the first queue may not be full, and may even be empty.

In one embodiment, those of the read delay messages whose consumption time is later than the current time point may be stored into a third queue, for example a ring queue with time scales, which may be referred to as a "time wheel". Since memory has no native ring structure, the wheel can be realized with the linear space of an array: when the pointer reaches the tail of the array, the next position wraps back to the head, and this wrap-around operation can be implemented by taking the index modulo the array length. The contents of the time wheel are stored by time scale, and delay messages with the same consumption time can be stored under the same time scale. An asynchronous thread of the time wheel can therefore make the third queue automatically transfer the delay messages of the preset time (i.e., the contents corresponding to the time scale currently pointed to by the pointer) to the first queue. In the embodiment of the present disclosure, the preset time is the current time, i.e., the system time. The asynchronous thread of the time wheel may thus be used to transfer the one or more delay messages in the third queue whose consumption time has reached the current time point to the first queue so that they can be fetched promptly.
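A minimal sketch of such a ring queue ("time wheel") backed by a flat array with modulo wrap-around; the slot granularity and names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative time wheel: a ring of fixed-width time slots realized as a plain array,
// with the wrap-around done by taking the index modulo the array length.
public class TimeWheel<T> {
    private final List<T>[] slots;      // each slot holds the messages due at that time scale
    private final long slotMillis;      // width of one time scale, e.g. 1000 ms
    private int pointer;                // slot the wheel currently points at

    @SuppressWarnings("unchecked")
    public TimeWheel(int slotCount, long slotMillis) {
        this.slots = new List[slotCount];
        this.slotMillis = slotMillis;
        for (int i = 0; i < slotCount; i++) {
            slots[i] = new ArrayList<>();
        }
    }

    /** Place a message into the slot matching its consumption time (must be within one wheel turn). */
    public synchronized void add(T message, long delayMillis) {
        int offset = (int) (delayMillis / slotMillis);
        int index = (pointer + offset) % slots.length;   // modulo gives the ring behaviour on a flat array
        slots[index].add(message);
    }

    /** Called by a timer once per slotMillis: returns everything now due and advances the pointer. */
    public synchronized List<T> tick() {
        List<T> due = new ArrayList<>(slots[pointer]);
        slots[pointer].clear();
        pointer = (pointer + 1) % slots.length;          // wrap back to the head at the tail of the array
        return due;
    }
}
```

A timer thread would call tick() once per time scale and hand the returned messages to the first queue.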

In step S2, the target delayed message in the first queue is returned in response to the delayed message fetch instruction, and the target delayed message is transferred from the first queue to be stored in the second queue.

In the embodiment of the present disclosure, one delay message processing device 14 corresponds to one service of one application program. A delayed message fetch instruction received by a delayed message processing apparatus 14 is distributed by the gateway module 13 according to the application number and the service number of the source service module of the delayed message fetch instruction, the source service module corresponding to the delayed message processing apparatus 14.

The delayed message processing apparatus 14 receives the delay message fetch instruction (POP instruction) and returns all delay messages currently in the first queue as the response to that instruction, so the number of returned delay messages may be several, one, or in some cases zero. That is, if there is no delay message whose consumption time is equal to or earlier than the current time, the delay message fetch instruction obtains no delay message.

In the embodiment of the present disclosure, a batch of delay messages returned by a corresponding delay message fetching instruction are all referred to as target delay messages. And after the target delay message is returned, transferring and storing the target delay message to a second queue. The second queue is set to prevent the delayed message from being deleted when it is not successfully consumed, and is also set in a memory (e.g., JVM memory) in the form of a general queue.

After a batch of target delay messages has been transferred to the second queue, those target delay messages are absent from the first queue for a preset duration.
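A sketch of how step S2 might be realized with two in-memory queues, draining the first ("ready") queue into the second ("in-flight") queue while returning the batch; the class is illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative handling of a POP instruction: everything currently in the first ("ready")
// queue is returned to the caller and moved into the second ("in-flight") queue.
public class PopHandler<T> {
    private final ConcurrentLinkedQueue<T> firstQueue = new ConcurrentLinkedQueue<>();
    private final ConcurrentLinkedQueue<T> secondQueue = new ConcurrentLinkedQueue<>();

    /** Called when a due delay message is transferred into the first queue. */
    public void offerReady(T message) {
        firstQueue.offer(message);
    }

    /** Responds to a delay message fetch instruction; the returned batch may be empty. */
    public List<T> pop() {
        List<T> batch = new ArrayList<>();
        T message;
        while ((message = firstQueue.poll()) != null) {
            secondQueue.offer(message);   // keep the message until its consumption is acknowledged
            batch.add(message);
        }
        return batch;                     // this whole batch is the "target delay messages"
    }

    /** Exposes the in-flight queue so that step S3 and the timeout re-queue thread can act on it. */
    public ConcurrentLinkedQueue<T> inFlight() {
        return secondQueue;
    }
}
```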

In step S3, in response to the delayed message consumption success message corresponding to the target delayed message, the target delayed message is deleted in the second queue and the database.

In some embodiments, the delay message processing apparatus 14 may delete a batch of target delay messages newly stored in the second queue when the next instruction received after receiving the delay message fetching instruction is a delay message consumption success message, and delete the corresponding delay message in the database according to the delay message identifier of the target delay message.
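Step S3 then amounts to clearing the acknowledged batch from the second queue and issuing a batch delete against the database keyed by the delay message identifiers. A sketch for the simple case in which the whole second queue is the latest batch (the SQL, table and column names are assumptions):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative response to a consumption success message: remove the batch from the
// second queue, then delete the same messages from the database by their identifiers.
public class AckHandler {

    public void onConsumptionSuccess(Queue<Long> secondQueue, Connection conn) throws SQLException {
        // Drain the batch (here: the whole second queue, assuming timed-out messages
        // have already been moved back to the first queue).
        List<Long> ids = new ArrayList<>();
        Long id;
        while ((id = secondQueue.poll()) != null) {
            ids.add(id);
        }
        if (ids.isEmpty()) {
            return;
        }
        // Batch delete keyed by the delay message identifier (assumed column names).
        try (PreparedStatement ps =
                     conn.prepareStatement("DELETE FROM delay_message WHERE delay_message_id = ?")) {
            for (Long messageId : ids) {
                ps.setLong(1, messageId);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}
```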

In other embodiments, the next instruction received by the delay message processing apparatus 14 after a delay message fetch instruction may again be a delay message fetch instruction. There are two possible cases: in the first case, the target delay messages corresponding to the previous fetch instruction were not consumed successfully; in the second case, the two delay message consumption success messages corresponding to the two fetch instructions are received consecutively afterwards.

According to one embodiment of the disclosure, an asynchronous thread is set up to automatically transfer the delay messages whose storage duration in the second queue exceeds a preset value back to the first queue for later use. To cope with the first case, the preset value may be set smaller than, for example, the normal time interval between two delay message fetch instructions, which can be obtained by reading a system setting or by averaging the time intervals of a plurality of fetch instructions. When delay messages that time out in the second queue are automatically returned to the first queue in this way, all delay messages remaining in the second queue are guaranteed to be the target delay messages of the latest fetch instruction. In that case, all delay messages in the second queue can be deleted in response to the latest delay message consumption success message, i.e., the second queue is emptied, and the corresponding delay messages are deleted from the database according to the delay message identifiers of the deleted messages.
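A sketch of such an asynchronous re-queue thread using a scheduled executor; for simplicity the second queue is modelled here as a map from message to the time it was stored, and the check period and names are assumptions:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative "expired in-flight message" thread: any message that has sat in the second
// queue longer than a preset value is moved back to the first queue to be fetched again.
public class InFlightTimeoutMonitor<T> {
    private final Queue<T> firstQueue;
    private final Map<T, Long> inFlightSince = new ConcurrentHashMap<>(); // message -> time stored in second queue
    private final long timeoutMillis;                                     // the "preset value"
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public InFlightTimeoutMonitor(Queue<T> firstQueue, long timeoutMillis) {
        this.firstQueue = firstQueue;
        this.timeoutMillis = timeoutMillis;
    }

    /** Record that a message has just been moved into the second queue. */
    public void markInFlight(T message) {
        inFlightSince.put(message, System.currentTimeMillis());
    }

    /** Remove a message that was acknowledged (deleted from the second queue). */
    public void markDone(T message) {
        inFlightSince.remove(message);
    }

    /** Start the asynchronous check that re-queues timed-out messages. */
    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            Iterator<Map.Entry<T, Long>> it = inFlightSince.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<T, Long> entry = it.next();
                if (now - entry.getValue() > timeoutMillis) {
                    firstQueue.offer(entry.getKey());   // hand it back to the "ready" queue
                    it.remove();
                }
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}
```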

To cope with the second case, delay messages in the second queue may be deleted in batches according to their storage time. For example, after a delay message consumption success message is received, it is determined whether the time interval between that message and the previous delay message fetch instruction lies within a preset range (in general, the interval between a fetch instruction and its corresponding consumption success message has a fairly stable value). If the interval lies within the preset range, the consumption success message corresponds to the latest fetch instruction while an earlier fetch instruction failed to yield a consumption success message; in this case, the batch of target delay messages most recently stored in the second queue is deleted, together with the corresponding delay messages stored in the database, and the delay messages stored in the second queue at an earlier time are transferred back to the first queue. If the interval is not within the preset range, the consumption success message corresponds to an earlier fetch instruction; which fetch instruction it corresponds to can be determined from the ratio of the interval to the preset interval. The batch of target delay messages corresponding to that fetch instruction is then deleted, together with the corresponding delay messages stored in the database; the delay messages stored earlier than that batch are transferred back to the first queue, and the delay messages stored later than that batch are left untouched.

In this case, if the asynchronous thread is configured to automatically transfer the delay messages whose storage duration in the second queue exceeds the preset value back to the first queue, the preset value may be set to be greater than n times the preset interval, where n is an integer greater than or equal to 2, for example 3, so that a batch of target delay messages is transferred back to the first queue only after, for example, 3 delay message consumption success messages in a row have failed to correspond to that batch.

How to set the preset value can be determined by those skilled in the art according to actual requirements and the sending logic of the delay message fetch instruction and the delay message consumption success message; the present disclosure places no particular limitation on this.

After the delay messages in the second queue and the corresponding delay messages in the database have been deleted, the consumption process of the delay messages can be recorded in a preset delay message table and a delay message archiving table. The delay message table and the delay message archiving table may have the same structure and may be used to record, for each processed delay message, its delay message identifier, application number, service number, message content, creation time, consumption time, successful consumption time, and so on.

Because the delay messages stored in the first, second and third queues are messages that have not yet been successfully consumed, and each of them still has a corresponding record in the database, repeated consumption is avoided as far as possible by reading, when reading the delay messages whose consumption time is within the preset time period, only those delay messages that are not currently stored in the first queue, the second queue or the third queue.

Fig. 3 is a schematic diagram of the delayed message processing system 100 provided by the embodiment of the present disclosure executing a delayed message processing method 200.

Referring to fig. 3, in the system 100, the gateway module 13 and the delayed message processing apparatus 14 may be both set by the unified configuration center 15 to execute the delayed message processing method 200.

In executing the delay message processing method 200, a service module 11 generates a delay message and sends a delay message storage instruction (PUSH instruction) to the gateway module 13 to have the delay message stored. The delay message storage instruction includes an application number (APP Code), a service number, a data isolation identifier (slice ID), and the message body content (Message Content). The data isolation identifier is used to isolate different message body contents from one another.

When the gateway module 13 receives a delay message storage instruction sent by a service module 11, it first determines whether that service module 11 is authorized to use delay message processing; if so, it then determines whether the service module 11 has remaining traffic quota for using the system 100. If the service module 11 satisfies both the authorization requirement and the traffic requirement, the gateway module 13 obtains the delay message to be stored and then generates a delay message identifier (delay message ID) according to the consumption time, the application number and the service number of the delay message.

In the disclosed embodiment, the delay message identifier may be, for example, a 64-bit long integer (Long type). To avoid negative values, the first bit is not used. Bits 2 to 11 record the application number, the service number and the data isolation identifier (slice ID): the first 8 of these bits record the application number and the service number, and the last 2 bits record the slice ID, whose maximum value is 3. Bits 12 to 52 record the timestamp of the consumption time of the delay message, expressed in milliseconds as the difference between the consumption time and a base point time; these 41 bits can record timestamps spanning about 69 years, and the base point time can be chosen freely, for example the zero hour of the software release date. Bits 53 to 64 record the sequence number of the delay message among the delay messages whose consumption time falls in the same millisecond; the maximum value of these 12 bits is 4095.
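A sketch of packing and unpacking such an identifier with bit shifts; the field widths follow the description above, while the base point time and method names are assumptions:

```java
// Illustrative packing of the 64-bit delay message identifier described above:
// 1 unused sign bit | 8 bits application + service number | 2 bits slice ID |
// 41 bits millisecond timestamp (offset from a base point time) | 12 bits sequence number.
public final class DelayMessageId {
    private static final long BASE_POINT_MILLIS = 1_600_000_000_000L; // assumed base point time

    public static long pack(int appServiceCode, int sliceId, long consumeTimeMillis, int sequence) {
        if (appServiceCode < 0 || appServiceCode > 0xFF) throw new IllegalArgumentException("appServiceCode");
        if (sliceId < 0 || sliceId > 3) throw new IllegalArgumentException("sliceId");
        if (sequence < 0 || sequence > 0xFFF) throw new IllegalArgumentException("sequence");
        long timestamp = consumeTimeMillis - BASE_POINT_MILLIS;          // 41 bits cover roughly 69 years
        if (timestamp < 0 || timestamp >= (1L << 41)) throw new IllegalArgumentException("timestamp");
        return (((long) appServiceCode) << 55)
             | (((long) sliceId) << 53)
             | (timestamp << 12)
             | sequence;
    }

    public static long consumeTimeMillis(long id) {
        return ((id >>> 12) & ((1L << 41) - 1)) + BASE_POINT_MILLIS;
    }

    public static int appServiceCode(long id) {
        return (int) ((id >>> 55) & 0xFF);
    }

    public static int sliceId(long id) {
        return (int) ((id >>> 53) & 0x3);
    }

    public static int sequence(long id) {
        return (int) (id & 0xFFF);
    }

    private DelayMessageId() {
    }
}
```

Because the application/service bits are the most significant, the identifiers of one service cluster together and, within a service, sort by consumption time, which is what allows each delay message processing device 14 to range-scan its own messages by time, as described below.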

After generating the delay message identifiers, the gateway module 13 stores each delay message into the delay message table in the database 12 according to its delay message identifier, using the delay message identifier (delay message ID) as the primary key and storing the message body content in the fields corresponding to that primary key. The fields corresponding to one primary key in the delay message table may include, for example, the consumption time, application code, service code, data isolation identifier, message body content, and the like.

When each delay message processing device 14 reads a plurality of delay messages whose consumption time is within a preset time period, only the delay message of the target application number and the target service number corresponding to itself is read. By generating the delay message identifier using the application number, the service number, and the consumption time, each delay message processing apparatus 14 can quickly identify the delay message of the corresponding application number and service number only by reading the delay message identifier, and can also quickly identify a plurality of delay messages of which the consumption time is within a preset time period.

When the gateway module 13 receives a delay message fetch instruction (POP instruction) or a delay message consumption success message sent by a service module 11, it may first obtain the application number and service number corresponding to that instruction or message and determine whether the service module 11 satisfies the authorization and traffic requirements. If both are satisfied, the gateway module 13 sends the fetch instruction or the consumption success message to the delay message processing device 14 corresponding to that application number and service number. On receiving the POP instruction or the consumption success message, the delay message processing device 14 responds in the manner of the embodiment shown in fig. 2, which is not repeated here.
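The dispatch inside the gateway can be pictured as a routing table keyed by application number and service number; a sketch with illustrative interface and method names:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative dispatch table inside the gateway: each (application number, service number)
// pair maps to exactly one delay message processing device.
public class GatewayDispatcher {

    /** Minimal view of a delay message processing device, for illustration. */
    public interface DelayedMessageProcessor {
        List<String> onPop();            // respond to a delay message fetch instruction
        void onConsumptionSuccess();     // respond to a delay message consumption success message
    }

    private final Map<String, DelayedMessageProcessor> routes = new ConcurrentHashMap<>();

    public void register(String appCode, String serviceCode, DelayedMessageProcessor processor) {
        routes.put(appCode + ":" + serviceCode, processor);
    }

    public List<String> dispatchPop(String appCode, String serviceCode) {
        DelayedMessageProcessor p = routes.get(appCode + ":" + serviceCode);
        return p == null ? Collections.emptyList() : p.onPop();
    }

    public void dispatchSuccess(String appCode, String serviceCode) {
        DelayedMessageProcessor p = routes.get(appCode + ":" + serviceCode);
        if (p != null) {
            p.onConsumptionSuccess();
        }
    }
}
```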

When the delay message processing device 14 returns delay messages in response to a delay message fetch instruction, the batch of delay messages is sent to the gateway module 13 as the return value of the instruction, and the gateway module 13 forwards the batch to the corresponding service module 11. After receiving the batch of delay messages, the service module 11 consumes them and, when consumption succeeds, sends a delay message consumption success message to the gateway module 13.

The plurality of delay message processing devices 14 may be implemented as a distributed cluster, with zookeeper used to distribute the cluster tasks. zookeeper is a distributed, open-source coordination service for distributed applications that can provide consistency services for them. When the system 100 starts, the plurality of delay message processing devices 14 take part in an election: a leader is chosen through zookeeper leader election, and the leader distributes the cluster tasks by communicating with the other machines in the cluster, assigning corresponding application codes and service codes to the delay message processing devices 14. One delay message processing device 14 is responsible for processing the delay messages of one application code and one service code, i.e., one delay message processing device 14 corresponds to one application code and one service code. In the disclosed embodiment, one delay message processing device 14 may be, for example, one service.
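The disclosure does not name a particular zookeeper client; as one possible sketch, the startup election could be done with Apache Curator's LeaderLatch (an assumption for illustration, with a placeholder connection string):

```java
import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Illustrative zookeeper-based startup: every delay message processing instance joins a
// leader election; the elected leader then assigns (application code, service code) pairs
// to the members of the cluster.
public class ClusterBootstrap {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));   // connection string is a placeholder
        client.start();

        LeaderLatch latch = new LeaderLatch(client, "/delay-message/leader");
        latch.start();

        // Wait briefly for the election to settle; returns true if this instance is the leader.
        boolean isLeader = latch.await(10, TimeUnit.SECONDS);
        if (isLeader) {
            // Leader-only work: distribute one (appCode, serviceCode) pair to each processing
            // device, e.g. by writing assignment nodes that the other instances watch.
            System.out.println("elected leader, distributing cluster tasks");
        } else {
            // Follower: wait for the leader's assignment before starting its own queues and threads.
            System.out.println("follower, waiting for task assignment");
        }
    }
}
```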

Fig. 4 is an interaction diagram of a delayed message processing system in an embodiment of the disclosure.

Referring to fig. 4, during the operation of the system 100 a plurality of asynchronous threads are active; the independently running threads that can operate in parallel include at least:

the thread 131, a delayed message storage instruction response thread, is started by the gateway module 13, and is configured to respond to the delayed message storage instruction, obtain the delayed message, generate a delayed message identifier according to the application number, the service number, and the consumption time of the delayed message, and store the delayed message in the database according to the delayed message identifier.

The thread 132, which is a delay message fetch instruction distribution thread, is started by the gateway module 13, and is configured to respond to the delay message fetch instruction and distribute the delay message fetch instruction to the delay message processing apparatus 14 corresponding to the application number and the service number.

The thread 133, a delayed message successful consumption message distribution thread, is started by the gateway module 13, and is configured to distribute the delayed message successful consumption message to the delayed message processing apparatus 14 corresponding to the application number and the service number thereof.

Thread 141, a delayed message reading thread, is started by the delayed message processing device 14 and is used to periodically pull, from the database 12, a plurality of delay messages whose consumption time is within a preset time period, store the delay messages whose consumption time is earlier than or equal to the current time point into the first queue, and store the delay messages whose consumption time is later than the current time point into the third queue.

Thread 142, the expired message transfer thread, is started by the delayed message processing device 14 and is used to transfer the delay messages in the third queue whose consumption time has reached the current time to the first queue.

Thread 143, the read message return thread, is started by the delayed message processing device 14 and is used to transfer the delay messages whose storage duration in the second queue exceeds the preset value back to the first queue.

The number of threads 141, 142, 143 is the same as the number of delayed message processing devices 14 and, in some embodiments, the number of traffic modules 11 in the system 100.

The thread 144, a delayed message fetch instruction response thread, is started by the delayed message processing apparatus 14, and is configured to respond to the delayed message fetch instruction, return the target delayed message (all delayed messages) in the first queue to the gateway module 13 as the return information of the delayed message fetch instruction, and then transfer and store the target delayed message in the first queue to the second queue.

Thread 145, a delayed message consumption success message response thread, is started by the delayed message processing apparatus 14 and is used to respond to the delay message consumption success message by deleting the corresponding batch of target delay messages from the second queue and deleting the same delay messages from the database 12. In some embodiments, the thread 145 also archives the delayed message processing results after deleting the delay messages.

In addition to the above asynchronous threads, which work continuously while the system 100 is running, there is a cluster task allocation thread at system startup, which starts the plurality of delay message processing devices 14 and allocates corresponding application codes and service codes to each delay message processing device 14; after being started, each delay message processing device 14 also creates its own first queue, second queue and third queue through asynchronous threads.

The delay message processing system 100 and the delay message processing method 200 executed by it according to the embodiments of the present disclosure manage delay messages with multiple asynchronous threads: the delay messages are stored in the database 12, only the delay messages to be used in the near future are kept in memory for standby, and the delay messages stored in the database 12 are deleted only after a delay message consumption success message is obtained. This effectively increases the storage capacity for delay messages and improves the data reliability of delay message storage while ensuring the reading efficiency of the delay messages.

Corresponding to the above method embodiment, the present disclosure further provides a delay message processing apparatus, which may be used to execute the above method embodiment.

Fig. 5 is a block diagram schematically illustrating a delayed message processing apparatus according to an exemplary embodiment of the present disclosure.

Referring to fig. 5, the delayed message processing apparatus 500 may include:

a message extraction module 51, configured to read a plurality of delay messages with consumption time within a preset time period from a database, and store the delay messages with consumption time earlier than or equal to the current time point in a first queue;

a message dump module 52 configured to return the target delayed message in the first queue in response to the delayed message fetch instruction, and to transfer and store the target delayed message from the first queue to a second queue;

a message deletion module 53 configured to delete the target delayed message in the second queue and the database in response to a delayed message consumption success message corresponding to the target delayed message.

In an exemplary embodiment of the present disclosure, the message extraction module 51 is configured to: store the delay messages with consumption time later than the current time point into a third queue, wherein the third queue is a ring queue with time scales; and periodically transfer the delay messages in the third queue whose consumption time is earlier than or equal to the current time point to the first queue.

In an exemplary embodiment of the present disclosure, the message extraction module 51 is configured to: and reading a plurality of delay messages corresponding to the target application number and the target service number from the database according to the delay message identifier.

In an exemplary embodiment of the disclosure, the message dump module 52 is configured to: transfer and store the delay messages whose storage duration in the second queue exceeds a preset value to the first queue.

In an exemplary embodiment of the present disclosure, the message extraction module 51 is configured to: reading only a plurality of delayed messages which have consumption time within the preset time period and are not currently stored in the first queue, the second queue and the third queue.

Since the functions of the apparatus 500 have been described in detail in the corresponding method embodiments, the disclosure is not repeated herein.

It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.

In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module" or "system."

An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.

As shown in fig. 6, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, and a bus 630 that couples the various system components including the memory unit 620 and the processing unit 610.

Wherein the storage unit stores program code that is executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 610 may perform the steps shown in fig. 2.

The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory (RAM) unit 6201 and/or a cache memory unit 6202, and may further include a read-only memory (ROM) unit 6203.

The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.

Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.

The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.

In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.

The program product for implementing the above method according to an embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).

Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
