Data cache processing method and device


Abstract (designed and created by 孙晓波 on 2020-12-24): The invention discloses a data cache processing method and apparatus. A specific embodiment of the method is applied to a local cache, the local cache comprising two blocking queues and a pointer for indicating switching between the two blocking queues. The method comprises: acquiring data to be processed; determining a first blocking queue, of the two blocking queues, to which the pointer currently points, and writing the data to be processed into the first blocking queue; when the data to be processed in the first blocking queue meets a preset condition, copying it to a target thread so as to process it by utilizing the target thread; and pointing the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue. The invention relates to the field of computer technology. This embodiment improves transmission speed and throughput.

1. A data cache processing method, characterized in that the method is applied to a local cache, wherein the local cache comprises two blocking queues and a pointer for indicating switching between the two blocking queues; the method comprises the following steps:

acquiring data to be processed;

determining a first blocking queue, of the two blocking queues, to which the pointer currently points, and writing the data to be processed into the first blocking queue;

when the data to be processed in the first blocking queue meets a preset condition, copying the data to be processed in the first blocking queue to a target thread so as to process the data to be processed by utilizing the target thread;

and pointing the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue.

2. The method of claim 1, further comprising:

locking the process of switching the pointer from the first blocking queue to the second blocking queue, and releasing the lock after the pointer points to the second blocking queue.

3. The method of claim 1, wherein the processing the data to be processed by the target thread comprises:

and confining the data to be processed within a target thread by using a thread confinement technique, so as to write the data to be processed into a distributed cache by utilizing the target thread.

4. The method of claim 3, after said writing the data to be processed into the distributed cache by utilizing the target thread, further comprising:

and releasing the data to be processed in the target thread.

5. The method of claim 1, further comprising: writing a time length threshold or a storage space threshold in a callback mode;

the preset conditions include: the storage space of the data to be processed is not less than the storage space threshold, or the time length for writing the data to be processed into the first blocking queue is not less than the time length threshold.

6. The method of claim 1,

the storage space of the first blocking queue and the storage space of the second blocking queue are equal.

7. The method according to any one of claims 1 to 6,

the first blocking queue and/or the second blocking queue is a ConcurrentLinkedQueue.

8. A data cache processing apparatus, comprising: a data writing module, a data copying module, and a switching module; wherein:

the data writing module is used for acquiring data to be processed, determining a first blocking queue, of two blocking queues, to which a pointer in a local cache currently points, and writing the data to be processed into the first blocking queue; wherein the local cache comprises: the two blocking queues and the pointer for indicating switching between the two blocking queues;

the data copying module is used for copying the data to be processed in the first blocking queue to a target thread when the data to be processed in the first blocking queue meets a preset condition so as to process the data to be processed by utilizing the target thread;

and the switching module is used for pointing the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue.

9. An electronic device for processing data, comprising:

one or more processors;

a storage device for storing one or more programs,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.

10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a data cache.

Background

In a high concurrency scenario, when the amount of concurrency is high enough, a local cache is often used to reduce the pressure of a third-party cache (e.g., a distributed cache).

When the local cache is used to read and write data, writing data into the local cache requires a lock. When the data volume in the local cache, or the elapsed time interval, reaches a threshold, the data is read from the local cache and written into the third-party cache, a process that also requires a lock. After the third-party cache write succeeds, the corresponding data is deleted and the lock is released.

In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:

writing data from the local cache into a third-party cache involves network IO (Input/Output) and is time-consuming; in a high-concurrency scenario with a large enough data volume, the local cache quickly reaches its threshold and triggers the lock, so write operations can only wait until the data has been written into the third-party cache and the local cache has been emptied, which greatly reduces transmission speed and throughput.

Disclosure of Invention

In view of this, embodiments of the present invention provide a method and an apparatus for processing a data cache, which can improve transmission speed and throughput.

To achieve the above object, according to an aspect of an embodiment of the present invention, a method for processing a data cache is provided.

The data cache processing method is applied to a local cache, wherein the local cache comprises two blocking queues and a pointer for indicating the switching of the two blocking queues; the method comprises the following steps:

acquiring data to be processed;

determining a first blocking queue, of the two blocking queues, to which the pointer currently points, and writing the data to be processed into the first blocking queue;

when the data to be processed in the first blocking queue meets a preset condition, copying the data to be processed in the first blocking queue to a target thread so as to process the data to be processed by utilizing the target thread;

and pointing the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue.

Optionally, the method further comprises:

locking the process of switching the pointer from the first blocking queue to the second blocking queue, and releasing the lock after the pointer points to the second blocking queue.

Optionally, the processing the data to be processed by using the target thread includes:

and confining the data to be processed within a target thread by using a thread confinement technique, so as to write the data to be processed into the distributed cache by utilizing the target thread.

Optionally, after the writing of the data to be processed into the distributed cache by utilizing the target thread, the method further includes:

and releasing the data to be processed in the target thread.

Optionally, the method further comprises: writing a time length threshold or a storage space threshold in a callback mode;

the preset conditions include: the storage space of the data to be processed is not less than the storage space threshold, or the time length for writing the data to be processed into the first blocking queue is not less than the time length threshold.

Optionally, the storage spaces of the first blocking queue and the second blocking queue are equal.

Optionally, the first blocking queue and/or the second blocking queue is a ConcurrentLinkedQueue.

To achieve the above object, according to another aspect of the embodiments of the present invention, a data caching processing apparatus is provided.

The data cache processing apparatus of the embodiment of the invention comprises: a data writing module, a data copying module, and a switching module; wherein:

the data writing module is used for acquiring data to be processed, determining a first blocking queue, of two blocking queues, to which a pointer in a local cache currently points, and writing the data to be processed into the first blocking queue; wherein the local cache comprises: the two blocking queues and the pointer for indicating switching between the two blocking queues;

the data copying module is used for copying the data to be processed in the first blocking queue to a target thread when the data to be processed in the first blocking queue meets a preset condition so as to process the data to be processed by utilizing the target thread;

and the switching module is used for pointing the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue.

To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided an electronic device that processes data.

An electronic device for processing data according to an embodiment of the present invention includes: one or more processors; the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors implement the data caching processing method of the embodiment of the invention.

To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided a computer-readable storage medium.

A computer-readable storage medium of an embodiment of the present invention stores thereon a computer program, and when the program is executed by a processor, the computer program implements a data cache processing method of an embodiment of the present invention.

One embodiment of the above invention has the following advantages or benefits: two blocking queues are arranged in the local cache, and a pointer is used to switch between them. When data to be processed is received, it is written into the first blocking queue, i.e. the queue the pointer currently points to. When the data in the first blocking queue meets the preset condition, that data can be processed by the target thread, and at the same time the pointer is switched to the second blocking queue so that subsequent data to be processed continues to be written there. Because the two queues are switched in this way, the second blocking queue keeps accepting new writes while the first blocking queue's data is copied to the target thread, which shortens write waiting time and improves transmission speed and throughput. In addition, because blocking queues are used, no lock is needed while writing data into a queue, so reducing the number of locks further improves transmission speed and throughput.

Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.

Drawings

The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:

fig. 1 is a schematic diagram of main steps of a processing method of a data cache according to an embodiment of the present invention;

FIG. 2 is a flow chart illustrating another data caching processing method according to an embodiment of the present invention;

FIG. 3 is a diagram illustrating major steps of a further data caching processing method according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of the main modules of a data caching processing device according to an embodiment of the present invention;

FIG. 5 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;

fig. 6 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.

Detailed Description

Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

It should be noted that the embodiments of the present invention and the technical features of the embodiments may be combined with each other without conflict.

Fig. 1 is a schematic diagram illustrating main steps of a data caching processing method according to an embodiment of the present invention.

As shown in fig. 1, a processing method of a data cache according to an embodiment of the present invention may be applied to a local cache, where the local cache includes two blocking queues and a pointer for indicating switching between the two blocking queues; the method mainly comprises the following steps:

step S101: and acquiring data to be processed.

The data to be processed is the data waiting to be written into the local cache. For example, in a scenario where a local cache is added between an application and Redis to reduce the pressure on Redis, the data to be processed may be data generated by the application or data received by the application.

Step S102: and determining a first blocking queue of the two blocking queues currently pointed by the pointer, and writing the data to be processed into the first blocking queue.

Two blocking queues are arranged in the local cache, and the pointer indicates which of the two is current. When data is to be written into the local cache, for example when a put method is called, the data is written into the current queue, i.e. the queue the pointer currently points to; if the pointer points to the first blocking queue, the data to be processed is written directly into the first blocking queue through the current-queue pointer. Here, the blocking queue may be a blocking queue in Java. A blocking queue has the property that when the queue is empty, all consumer-side threads are automatically blocked (suspended) until data is put into the queue, and when the queue is full, all producer-side threads are automatically blocked (suspended) until the queue has a free slot, at which point they are automatically woken up. This property makes the data write operation thread-safe, so no lock needs to be taken on the write path, which preserves the performance and throughput of the local cache under highly concurrent writes.
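As a minimal Java sketch of the structure just described, the following illustrative class holds two equally sized blocking queues and a volatile reference acting as the pointer. The names DoubleBufferCache and current are assumptions for illustration, not taken from the invention, and LinkedBlockingQueue is just one blocking queue exhibiting the blocking behavior described above.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative double-queue local cache; names are assumptions.
public class DoubleBufferCache<T> {
    private final BlockingQueue<T> queueA;
    private final BlockingQueue<T> queueB;
    private volatile BlockingQueue<T> current; // the "pointer" selecting the active queue

    public DoubleBufferCache(int capacity) {
        // both queues get the same capacity (see claim 6)
        this.queueA = new LinkedBlockingQueue<>(capacity);
        this.queueB = new LinkedBlockingQueue<>(capacity);
        this.current = queueA;
    }

    // Writers call put(); the blocking queue is itself thread-safe,
    // so no explicit lock is taken on the write path.
    public void put(T data) throws InterruptedException {
        current.put(data); // blocks only while the active queue is full
    }
}
```

Because BlockingQueue.put is already thread-safe, writers never take an explicit lock; they block only when the active queue is full, matching the behavior described above.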

It should be understood that "first" and "second" do not impose any fixed order on the two blocking queues; they merely describe the pointer's switching between them. Whenever data to be processed is about to be written, the blocking queue to which the pointer currently points is the first blocking queue. For example, if the local cache contains blocking queue A and blocking queue B and the pointer points to queue A when data is about to be written, then queue A is the first blocking queue. If at another moment the pointer points to queue B when data is about to be written, then queue B is the first blocking queue at that time.

Step S103: when the data to be processed in the first blocking queue meets a preset condition, copying the data to be processed in the first blocking queue to a target thread so as to process the data to be processed by utilizing the target thread.

Wherein the preset conditions include: the storage space occupied by the data to be processed is not less than the storage space threshold, or the time for which data to be processed has been written into the first blocking queue is not less than the duration threshold. The storage space threshold and the duration threshold may be specified by the user in a callback mode; for example, in Java 8 and above they may be supplied through a Supplier-style callback, so that control of time and space is handed over to the cache. For example, with a storage space threshold of 2M and a duration threshold of 1s, once the data to be processed stored in the first blocking queue reaches 2M, or the time interval reaches 1s, the data to be processed in the first blocking queue is copied to the target thread so that the target thread can process it.
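As a sketch of the preset condition, the following hypothetical FlushPolicy accepts both thresholds as Supplier callbacks, in the spirit of the Java 8 callback mode mentioned above; all names here are illustrative assumptions.

```java
import java.util.function.Supplier;

// Illustrative threshold configuration via callbacks; names are assumptions.
public class FlushPolicy {
    private final Supplier<Long> maxBytes;     // storage-space threshold
    private final Supplier<Long> maxAgeMillis; // duration threshold

    public FlushPolicy(Supplier<Long> maxBytes, Supplier<Long> maxAgeMillis) {
        this.maxBytes = maxBytes;
        this.maxAgeMillis = maxAgeMillis;
    }

    // The preset condition of claim 5: flush when either threshold is reached.
    boolean shouldFlush(long bufferedBytes, long firstWriteMillis, long nowMillis) {
        return bufferedBytes >= maxBytes.get()
                || (nowMillis - firstWriteMillis) >= maxAgeMillis.get();
    }
}
```

With new FlushPolicy(() -> 2L * 1024 * 1024, () -> 1000L), shouldFlush returns true exactly when the 2M or 1s condition of the example above is reached.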

To ensure thread safety, a thread confinement technique can be used to confine the data to be processed within the target thread, so that the target thread writes the data into the distributed cache. In this way, while the target thread writes the data to be processed into the distributed cache, the data cannot be tampered with, and thread safety is guaranteed. This process needs no lock, which preserves local-cache performance by further reducing the number of locks, and improves transmission speed and throughput.

It is understood that, besides writing the data to be processed into the distributed cache, the target thread may carry out other user-specified operations, which may differ between actual scenarios. For example, the target thread performs the user operation, reads the data to be processed, and writes it to Redis. The user operation is a method exposed to the user for implementing a specific action and may, for instance, be implemented with custom Java Stream code. After the user operation completes, the target thread releases the variables it holds internally; that is, after the data to be processed from the first blocking queue has been written into the distributed cache, the copy held by the target thread is released, so that the thread is ready for the subsequent data to be processed from the second blocking queue to be copied in. Cycling in this way implements the data cache processing.
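A minimal sketch of this flush step might look as follows, assuming a hypothetical DistributedCacheClient interface standing in for the Redis write. Draining into a local list realizes the thread confinement described above, since the list is reachable only from the thread running flush.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical client for the distributed cache (e.g. a Redis wrapper).
interface DistributedCacheClient<T> {
    void writeBatch(List<T> batch);
}

class Flusher<T> {
    // Intended to run inside the target thread (e.g. via an ExecutorService).
    void flush(Queue<T> fullQueue, DistributedCacheClient<T> client) {
        List<T> confined = new ArrayList<>(); // reachable only from this thread
        for (T item; (item = fullQueue.poll()) != null; ) {
            confined.add(item); // copy the pending data out of the queue
        }
        client.writeBatch(confined); // the user operation, e.g. write to Redis
        confined.clear();            // release the copy once the write succeeds (claim 4)
    }
}
```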

Step S104: and pointing the pointer to a second blocking queue of the two blocking queues so as to continue storing data to be processed by utilizing the second blocking queue.

After the data to be processed has been copied to the target thread, the pointer is switched to point to the second blocking queue. To ensure that the data to be processed in either queue can be copied to the target thread when the preset condition is met, in one embodiment of the present invention the storage spaces of the first blocking queue and the second blocking queue are equal, i.e. the two queues have exactly the same size, which keeps the amount of data controllable.

Since the amount of data a local cache stores is generally small, the first and second blocking queues may fill up within a short time, each requiring a pointer switch; that is, pointer switching may be frequent. In one embodiment of the invention, to ensure that the pointer switches without error, the process of the pointer moving from the first blocking queue to the second blocking queue is locked, and the lock is released once the pointer points to the second blocking queue. It will be appreciated that the process of the pointer moving from the second blocking queue back to the first is locked in the same way, with the lock released once the pointer points to the first blocking queue. In short, the pointer switch is performed under a lock that is released when the switch completes, which guarantees orderly switching of the pointer.
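A sketch of the locked switch follows, under the assumption that only the pointer swap itself is guarded while the write path stays lock-free; the names SwitchableBuffer and switchQueues are illustrative.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative buffer with a locked pointer swap; names are assumptions.
class SwitchableBuffer<T> {
    final Queue<T> queueA = new ConcurrentLinkedQueue<>();
    final Queue<T> queueB = new ConcurrentLinkedQueue<>();
    private volatile Queue<T> current = queueA; // the pointer
    private final ReentrantLock switchLock = new ReentrantLock();

    Queue<T> active() { return current; } // lock-free write path

    // Swap the pointer to the other queue and return the full one.
    Queue<T> switchQueues() {
        switchLock.lock();                 // lock the switching process
        try {
            Queue<T> full = current;
            current = (full == queueA) ? queueB : queueA;
            return full;                   // hand off to the flush thread
        } finally {
            switchLock.unlock();           // release once the pointer has moved
        }
    }
}
```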

The first blocking queue and/or the second blocking queue may use the relatively efficient ConcurrentLinkedQueue, a linked queue, as the storage medium. ConcurrentLinkedQueue itself guarantees thread safety by applying CAS operations to its head and tail pointers, and a linked structure is inherently fast to write to, so its overall efficiency is very high.

The following explains the data cache processing method of the embodiment of the present invention with reference to the queue-and-thread diagram shown in fig. 2. As shown in fig. 3, the method mainly includes the following steps:

step S301: and acquiring data to be processed.

Step S302: determining, of the two blocking queues in the local cache, the first blocking queue to which the pointer currently points, and writing the data to be processed into the first blocking queue.

When data is to be written into the local cache, for example when a put method is called, the data is written into the current queue, i.e. the queue selected by the pointer. As shown in fig. 2, the queue the pointer currently points to is queue 1, so the data to be processed is written into queue 1. Since queue 1 is a blocking queue, the write process requires no lock.

Step S303: when the storage amount of the data to be processed in the first blocking queue is not less than a preset space threshold value or the time length for writing the data to be processed into the first blocking queue is not less than the time length threshold value, copying the data to be processed in the first blocking queue to a sub-thread so as to process the data to be processed by using the sub-thread.

Here, the data to be processed may be copied into the sub-thread as shown in fig. 2, with a thread confinement technique used to guarantee thread safety. The sub-thread then performs the user operations, such as reading the data and writing it to Redis.

Step S304: and pointing the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue.

As shown in fig. 2, once the data copy completes, the pointer is switched from queue 1 to queue 2; queue 2 and queue 1 are exactly the same size. From this point, new write operations store data in queue 2, i.e. subsequent data to be processed is written into queue 2. When the data in queue 2 meets the preset condition, it is copied to the sub-thread just as for queue 1, and the pointer switches back to queue 1. Writing into the local cache proceeds by switching back and forth between the queues in this way; the switching process itself is locked, which guarantees orderly queue switching, maintains local-cache performance, and improves transmission speed and throughput.

Step S305: and after the data to be processed is written into the distributed cache by utilizing the sub-thread, releasing the data to be processed in the sub-thread.

Here, after the user operation completes, the sub-thread releases the variables it holds internally and waits for the next data copy.
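Putting the sketches together, a hypothetical end-to-end use (reusing the SwitchableBuffer sketch above; class and data names are illustrative) could look like this:

```java
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Demo {
    public static void main(String[] args) {
        SwitchableBuffer<String> buf = new SwitchableBuffer<>();
        ExecutorService worker = Executors.newSingleThreadExecutor();

        buf.active().offer("event-1");           // lock-free write path
        buf.active().offer("event-2");

        Queue<String> full = buf.switchQueues(); // locked pointer swap
        worker.submit(() -> {
            // thread-confined drain; a real flush would then write the batch to Redis
            for (String s; (s = full.poll()) != null; ) {
                System.out.println("flush: " + s);
            }
        });
        worker.shutdown();
    }
}
```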

According to the data cache processing method of the embodiment of the present invention, two blocking queues are arranged in the local cache, and a pointer is used to switch between them. When data to be processed is received, it is written into the first blocking queue, i.e. the queue the pointer currently points to. When the data in the first blocking queue meets the preset condition, that data can be processed by the target thread, and at the same time the pointer is switched to the second blocking queue so that subsequent data to be processed continues to be written there. Because the two queues are switched in this way, the second blocking queue keeps accepting new writes while the first blocking queue's data is copied to the target thread, which shortens write waiting time and improves transmission speed and throughput. In addition, because blocking queues are used, no lock is needed while writing data into a queue, so reducing the number of locks further improves transmission speed and throughput. Furthermore, a thread confinement technique confines the data to be processed within the target thread, so the process of writing the data into the distributed cache also needs no lock; by further reducing the number of locks, the performance of the local cache is preserved and the transmission speed and throughput are improved.

Fig. 4 is a schematic diagram of main modules of a data caching processing device according to an embodiment of the present invention.

As shown in fig. 4, a data cache processing apparatus 400 according to an embodiment of the present invention includes: a data writing module 401, a data copying module 402, and a switching module 403; wherein:

the data writing module 401 is configured to obtain data to be processed, determine a first blocking queue of two blocking queues to which a pointer in a local cache currently points, and write the data to be processed into the first blocking queue; wherein the local cache comprises: the two blocking queues and a pointer for indicating the switching of the two blocking queues;

the data copying module 402 is configured to copy the to-be-processed data in the first blocking queue to a target thread when the to-be-processed data in the first blocking queue meets a preset condition, so as to process the to-be-processed data by using the target thread;

the switching module 403 is configured to point the pointer to a second blocking queue of the two blocking queues, so as to continue writing the data to be processed by utilizing the second blocking queue.

In an embodiment of the present invention, the switching module 403 is configured to lock the process of switching the pointer from the first blocking queue to the second blocking queue, and to release the lock after the pointer points to the second blocking queue.

In an embodiment of the present invention, the data copying module 402 is configured to confine the data to be processed within a target thread by using a thread confinement technique, so as to write the data to be processed into a distributed cache by utilizing the target thread.

In an embodiment of the present invention, the data copying module 402 is further configured to release the to-be-processed data in the target thread.

In an embodiment of the present invention, the data writing module 401 is configured to write a duration threshold or a storage space threshold in a callback manner; the preset conditions include: the storage space of the data to be processed is not less than the storage space threshold, or the time length for writing the data to be processed into the first blocking queue is not less than the time length threshold.

In one embodiment of the present invention, the storage spaces of the first blocking queue and the second blocking queue are equal.

In an embodiment of the present invention, the first blocking queue and/or the second blocking queue is a ConcurrentLinkedQueue.

According to the data cache processing apparatus of the embodiment of the present invention, two blocking queues are arranged in the local cache, and a pointer is used to switch between them. When data to be processed is received, it is written into the first blocking queue, i.e. the queue the pointer currently points to. When the data in the first blocking queue meets the preset condition, that data can be processed by the target thread, and at the same time the pointer is switched to the second blocking queue so that subsequent data to be processed continues to be written there. Because the two queues are switched in this way, the second blocking queue keeps accepting new writes while the first blocking queue's data is copied to the target thread, which shortens write waiting time and improves transmission speed and throughput. In addition, because blocking queues are used, no lock is needed while writing data into a queue, so reducing the number of locks further improves transmission speed and throughput. Furthermore, a thread confinement technique confines the data to be processed within the target thread, so the process of writing the data into the distributed cache also needs no lock; by further reducing the number of locks, the performance of the local cache is preserved and the transmission speed and throughput are improved.

Fig. 5 shows an exemplary system architecture 500 of a processing apparatus or a processing method of a data cache to which the embodiments of the present invention may be applied.

As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 501, 502, 503 to interact with a server 505 over a network 504 to receive or send messages or the like. The terminal devices 501, 502, 503 may have various communication client applications installed thereon, such as a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.

The terminal devices 501, 502, 503 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.

The server 505 may be a server that provides various services, such as a background management server that supports shopping websites browsed by users using the terminal devices 501, 502, 503. The background management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (e.g., target push information and product information) to the terminal device.

It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use with a terminal device implementing an embodiment of the invention is shown. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.

As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.

The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.

In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 601.

It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a data writing module, a data copying module, and a switching module. The names of these modules do not in some cases constitute a definition of the module itself, for example, the data writing module may also be described as a "module writing data to be processed to the first blocking queue".

As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: acquire data to be processed; determine a first blocking queue, of two blocking queues, to which the pointer currently points, and write the data to be processed into the first blocking queue, wherein the local cache comprises the two blocking queues and a pointer for indicating switching between the two blocking queues; when the data to be processed in the first blocking queue meets a preset condition, copy the data to be processed in the first blocking queue to a target thread so as to process it by utilizing the target thread; and point the pointer to a second blocking queue of the two blocking queues so as to continue writing data to be processed by utilizing the second blocking queue.

According to the technical scheme of the embodiment of the present invention, two blocking queues are arranged in the local cache, and a pointer is used to switch between them. When data to be processed is received, it is written into the first blocking queue, i.e. the queue the pointer currently points to. When the data in the first blocking queue meets the preset condition, that data can be processed by the target thread, and at the same time the pointer is switched to the second blocking queue so that subsequent data to be processed continues to be written there. Because the two queues are switched in this way, the second blocking queue keeps accepting new writes while the first blocking queue's data is copied to the target thread, which shortens write waiting time and improves transmission speed and throughput. In addition, because blocking queues are used, no lock is needed while writing data into a queue, so reducing the number of locks further improves transmission speed and throughput. Furthermore, a thread confinement technique confines the data to be processed within the target thread, so the process of writing the data into the distributed cache also needs no lock; by further reducing the number of locks, the performance of the local cache is preserved and the transmission speed and throughput are improved.

The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
