Data processing method, apparatus, device, medium, and program product

Document No.: 1952027    Publication date: 2021-12-10

Reading note: This technology, "Data processing method, apparatus, device, medium, and program product", was designed and created by 孟可, 彭安 and 钱熙 on 2021-02-23. Its main content is as follows: the application provides a data processing method, apparatus, device, medium, and program product. Capacity information and data use information of a first cache region are acquired, where the first cache region is used to store data generated by a data generation end, the capacity information represents the dynamic use of capacity, and the data use information represents how each use node in the data use end uses the data; data meeting a preset transfer condition is then transferred from the first cache region to a historical data storage region according to the capacity information and the data use information. This solves the technical problem in the prior art that data transmission between the data generation end and the data use end is limited by the data use node with the weakest data processing capability, which keeps the overall data processing capability of the system low. At the same time, data can be recalled from the historical data storage region, which makes adding new data use nodes simpler and also improves the data security of each data use node.

1. A data processing method, comprising:

acquiring capacity information and data use information of a first cache region, wherein the first cache region is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data;

and transferring the data meeting the preset transfer condition from the first cache region to a historical data storage region according to the capacity information and the data use information.

2. The data processing method according to claim 1, wherein the transferring data satisfying a preset transfer condition from the first cache area to a historical data storage area according to the capacity information and the data usage information comprises:

determining remaining capacity information according to the capacity information, the remaining capacity information including: a remaining capacity, and/or a rate of change of the remaining capacity;

and if the residual capacity is less than or equal to a preset capacity threshold value and/or the change rate is greater than or equal to a preset change threshold value, transferring the data to be transferred from the first cache region to the historical data storage region according to the data use information and a preset period.

3. The data processing method according to claim 1, wherein the transferring data satisfying a preset transfer condition from the first cache area to a historical data storage area according to the capacity information and the data usage information comprises:

determining a proportion of used data based on the data usage information, the used data being data used by at least one of the nodes;

and when the proportion is larger than or equal to a preset proportion, if the using times of the used data are larger than or equal to the preset using times, transferring the data to be transferred from the first cache region to a historical data storage region according to the data using information.

4. A data processing method according to claim 2 or 3, wherein the data to be transferred comprises used data, the used data being data used by at least one of the nodes.

5. The data processing method according to claim 2 or 3, wherein the data to be transferred comprises all the data in the first cache region.

6. The data processing method according to claim 2 or 3, wherein the transferring data satisfying a preset transfer condition from the first cache region to a history data storage region according to the capacity information and the data usage information further comprises:

and if all the data in the first cache region are used by all the data using nodes, transferring all the data to the historical data storage region.

7. The data processing method of claim 1, further comprising:

acquiring a historical data use request of a data use node in a data use end;

determining target historical data in the historical data storage area according to the historical data use request;

and sending the target historical data to the data using node.

8. The data processing method of claim 7, wherein the sending the target historical data to the data using node comprises:

and if the residual capacity of the first cache region meets the storage requirement of the target historical data, storing the target historical data into the first cache region for the data using node to use.

9. The data processing method according to claim 7 or 8, wherein the sending the target history data to the data usage node comprises:

and if the residual capacity of the first cache region does not meet the storage requirement of the target historical data, storing the target historical data in a second cache region for the data using node to use.

10. The data processing method of claim 7, wherein the sending the target historical data to the data using node comprises:

and storing the target historical data to a second cache region for the data using node to use.

11. A data processing apparatus, comprising:

an acquisition module, configured to acquire capacity information and data use information of a first cache region, wherein the first cache region is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data;

and the processing module is used for transferring the data meeting the preset transfer condition from the first cache region to the historical data storage region according to the capacity information and the data use information.

12. A data processing system, comprising: a data generating end, a data caching end, and a data using end; wherein,

the data generating end is used for generating data to be used and storing the data to be used into the data caching end;

the data cache end comprises a first cache region and a historical data storage region, and the historical data storage region is used for storing data which meets preset transfer conditions in the first cache region;

the data using end comprises at least one data using node, and the data using node is used for calling data in the first cache region or the historical data storage region;

the data generating end or the data caching end is further used for realizing the data processing method of any one of claims 1 to 10.

13. An electronic device, comprising:

a processor; and

a memory for storing a computer program for the processor;

wherein the processor is configured to perform the data processing method of any one of claims 1 to 10 via execution of the computer program.

14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data processing method of any one of claims 1 to 10.

15. A computer program product comprising a computer program, characterized in that the computer program realizes the data processing method of any one of claims 1 to 10 when executed by a processor.

Technical Field

The present application relates to the field of computer data processing, and in particular, to a data processing method, apparatus, device, medium, and program product.

Background

When computer data is processed, the data using end and the data generating end often do not run synchronously and their processing speeds are not consistent, so a buffer area is needed to buffer the data and realize asynchronous data transmission.

With the continuous expansion of data consumer networks, there are a plurality of usage nodes that all need to use the data in the cache area, but the processing capability of each usage node is different. This makes the data processing efficiency of the entire system subject to the usage node with the lowest processing rate, that is, the "shortest board" or bottleneck.

How to ensure, under the condition that the capacity of the buffer area is limited, that data transmission between the data generating end and the data using end is not limited by the node with the weakest data processing capability is therefore a technical problem to be solved urgently.

Disclosure of Invention

The application provides a data processing method, a data processing device, a data processing apparatus, a data processing medium and a program product, which are used for solving the technical problem that in the prior art, data transmission between a data generation end and a data use end is limited by the data use node with the weakest data processing capability, so that the overall data processing capability of the system is low.

In a first aspect, the present application provides a data processing method, including:

acquiring capacity information and data use information of a first cache region, wherein the first cache region is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data;

and transferring the data meeting the preset transfer condition from the first cache region to a historical data storage region according to the capacity information and the data use information.

In a possible design, the pre-set transfer condition includes a first transfer condition, where the first transfer condition is used to meet a data production efficiency requirement of a data generation end, and correspondingly, the transferring, according to the capacity information and the data usage information, the data meeting the pre-set transfer condition from the first cache area to a historical data storage area includes:

determining remaining capacity information according to the capacity information, the remaining capacity information including: a remaining capacity, and/or a rate of change of the remaining capacity;

and if the residual capacity is less than or equal to a preset capacity threshold value and/or the change rate is greater than or equal to a preset change threshold value, transferring the data to be transferred from the first cache region to the historical data storage region according to the data use information and a preset period.

In one possible design, the preset transfer condition includes a second transfer condition, where the second transfer condition is used to satisfy a data usage requirement of a high-speed data usage node, and a data usage rate of the high-speed data usage node is greater than or equal to a preset rate, and correspondingly, the transferring the data satisfying the preset transfer condition from the first cache area to a historical data storage area according to the capacity information and the data usage information includes:

determining a proportion of used data based on the data usage information, the used data being data used by at least one of the nodes;

and when the proportion is larger than or equal to a preset proportion, if the using times of the used data are larger than or equal to the preset using times, transferring the data to be transferred from the first cache region to a historical data storage region according to the data using information.

Optionally, the data to be transferred includes used data, and the used data is data used by at least one node.

Optionally, the data to be transferred includes all the data in the first cache region.

In one possible design, the preset transfer condition includes a third transfer condition, where the third transfer condition is used to clean the first cache area in time after data is used by all the using nodes, and correspondingly, the data meeting the preset transfer condition is transferred from the first cache area to the historical data storage area according to the capacity information and the data use information, and the method further includes:

and if all the data in the first cache region are used by all the data using nodes, transferring all the data to the historical data storage region.

In one possible design, the data processing method further includes:

acquiring a historical data use request of a data use node in a data use end;

determining target historical data in the historical data storage area according to the historical data use request;

and sending the target historical data to the data using node.

In one possible design, the sending the target historical data to the data using node includes:

and if the residual capacity of the first cache region meets the storage requirement of the target historical data, storing the target historical data into the first cache region for the data using node to use.

Optionally, the sending the target history data to the data usage node includes:

and if the residual capacity of the first cache region does not meet the storage requirement of the target historical data, storing the target historical data in a second cache region for the data using node to use.

In one possible design, the sending the target historical data to the data using node includes:

and storing the target historical data to a second cache region for the data using node to use.

In a second aspect, the present application provides a data processing apparatus comprising:

an acquisition module, configured to acquire capacity information and data use information of a first cache region, wherein the first cache region is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data;

and the processing module is used for transferring the data meeting the preset transfer condition from the first cache region to the historical data storage region according to the capacity information and the data use information.

In a possible design, the preset transfer condition includes a first transfer condition, where the first transfer condition is used to meet a data production efficiency requirement of a data generation end, and correspondingly, the processing module is specifically configured to:

determining remaining capacity information according to the capacity information, the remaining capacity information including: a remaining capacity, and/or a rate of change of the remaining capacity;

and if the residual capacity is less than or equal to a preset capacity threshold value and/or the change rate is greater than or equal to a preset change threshold value, transferring the data to be transferred from the first cache region to the historical data storage region according to the data use information and a preset period.

In one possible design, the preset transfer condition includes a second transfer condition, where the second transfer condition is used to satisfy a data usage requirement of a high-speed data usage node, and a data usage rate of the high-speed data usage node is greater than or equal to a preset rate; correspondingly, the processing module is specifically configured to:

determining a proportion of used data based on the data usage information, the used data being data used by at least one of the nodes;

and when the proportion is larger than or equal to a preset proportion, if the using times of the used data are larger than or equal to the preset using times, transferring the data to be transferred from the first cache region to a historical data storage region according to the data using information.

Optionally, the data to be transferred includes used data, and the used data is data used by at least one node.

Optionally, the data to be transferred includes all the data in the first cache region.

In a possible design, the preset transfer condition includes a third transfer condition, where the third transfer condition is used to clean the first cache area in time after the data is used by all the using nodes, and correspondingly, the processing module is specifically configured to:

and if all the data in the first cache region are used by all the data using nodes, transferring all the data to the historical data storage region.

In a possible design, the obtaining module is further configured to obtain a historical data usage request of a data usage node in the data usage end;

the processing module is further used for determining target historical data in the historical data storage area according to the historical data use request;

the processing module is further configured to send the target history data to the data usage node.

In one possible design, the processing module is further configured to:

and if the residual capacity of the first cache region meets the storage requirement of the target historical data, storing the target historical data into the first cache region for the data using node to use.

Optionally, the processing module is further configured to send the target history data to the data using node, and specifically includes:

and if the residual capacity of the first cache region does not meet the storage requirement of the target historical data, storing the target historical data in a second cache region for the data using node to use.

In a possible design, the processing module is further configured to send the target history data to the data usage node, and specifically includes:

and storing the target historical data to a second cache region for the data using node to use.

In a third aspect, the present application provides a data processing system comprising: a data generating end, a data caching end, and a data using end; wherein,

the data generating end is used for generating data to be used and storing the data to be used into the data caching end;

the data cache end comprises a first cache region and a historical data storage region, wherein the historical data storage region is used for storing data in the first cache region that meets a preset transfer condition; optionally, the data cache end further comprises a second cache region, and when a data using node requests to use historical data, target historical data is stored from the historical data storage region into the second cache region for the data using node to use;

the data using end comprises at least one data using node, and the data using node is used for calling data in the first cache region or the historical data storage region;

the data generating end or the data caching end is further configured to implement any one of the possible data processing methods provided by the first aspect.

In a fourth aspect, the present application provides an electronic device, comprising:

a memory for storing program instructions;

and the processor is used for calling and executing the program instructions in the memory to execute any one of the possible data processing methods provided by the first aspect.

In a fifth aspect, the present application provides a storage medium, wherein a computer program is stored in the storage medium, and the computer program is used to execute any one of the possible data processing methods provided in the first aspect.

In a sixth aspect, the present application further provides a computer program product comprising a computer program, which when executed by a processor, implements any one of the possible data processing methods provided in the first aspect.

The application provides a data processing method, apparatus, device, medium and program product. Capacity information and data use information of a first cache area are obtained, where the first cache area is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data; data meeting a preset transfer condition is then transferred from the first cache region to the historical data storage region according to the capacity information and the data use information. This solves the technical problem that in the prior art, data transmission between the data generating end and the data using end is limited by the data using node with the weakest data processing capability, so that the overall data processing capability of the system is low. Meanwhile, data can be called from the historical data storage area, so that adding new data using nodes is simpler and more convenient, and the data security of each data using node is also improved.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.

Fig. 1 is a schematic view of an application scenario of a data processing method according to an embodiment of the present application;

fig. 2 is a schematic flow chart of a data processing method provided in the present application;

fig. 3 is a schematic flow chart of another data processing method according to an embodiment of the present application;

fig. 4 is a schematic flowchart of another data processing method provided in the embodiment of the present application;

fig. 5 is a schematic flowchart of another data processing method according to an embodiment of the present application;

fig. 6 is a schematic flowchart of yet another data processing method provided in the embodiment of the present application;

fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;

fig. 8 is a schematic structural diagram of an electronic device provided in the present application.

With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, including but not limited to combinations of embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any inventive step are within the scope of the present application.

The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

With the continuous development of computer processing technology, the processing of data is not completed by only one processing terminal, but a multi-terminal distributed processing structure is formed. Even in an electronic device, the processing of data is often performed by a plurality of chips. However, no matter how complex the structure is, the structure can be finally divided into two types, or two ends, namely a data generating end and a data using end.

In order to give full play to the best performance of the electronic devices at the two ends, the data generating end and the data using end run independently, which means that the two ends do not run synchronously. Either end may include a plurality of different nodes; for example, the data using end may include a plurality of data using nodes. Because the data transmission and processing capabilities of different data using nodes differ, some data using nodes are stronger and some are weaker.

In order to solve the problem that the data generation end and the data using end are not synchronized in the time dimension, a buffer area (cache area) is introduced for temporarily storing data generated by the data generation end, because the data generation end generally has a faster data processing capability. For example, in a computer, the data processing capability of the central processing unit (CPU) far exceeds that of other devices, so a memory is introduced for buffering.

However, since the capacity of the buffer is limited, if the data using end does not use the data in the buffer in time, the buffer will be full, and at this time, the data generating end can only stop generating data.

That is to say, although the buffer can adjust for the time-dimension asynchronism between the data generating end and the data using end, it is limited by its own capacity and cannot be made infinitely large, while the difference in processing speed between the data generation side and the data usage side tends to grow larger and larger. Therefore, the common practice is to enlarge the capacity of the buffer area; for example, the buffer memory of a mobile phone is upgraded from 1G to 2G, 4G, 8G and so on, but the increase may be limited by the physical volume of the device itself or by power consumption. This is clearly not a long-term solution.

This pushes the prior art to increase the processing capability of the data using end in turn, so as to reduce the asynchronism between the two ends. However, in some application scenarios the asynchronism is inherent to the application itself and cannot be eliminated. For example, servers or databases distributed in various places all need to use data from a central server or a central database; that is, the data generation end has only one node, but the data usage end has a plurality of data usage nodes. In order to ensure the data consistency of the data usage nodes, the data generated by the data generation end must wait until all the usage nodes have used it before it can be deleted from the cache area. This causes the data processing capacity, or data processing efficiency, of the entire system to be limited by the data use node with the weakest processing capability, which is referred to as the shortest board.

For the problem of the shortest board, a common practice is to allocate a data buffer area to each data usage node, that is, the data usage nodes and the data buffer areas form a one-to-one exclusive relationship. However, this configuration is costly: if a new data using node is added later, a data buffer area must be added at the same time, which is complex, requires large and expensive modifications to the whole system, and raises a new problem of how the new node obtains the same historical data.

Therefore, the prior art offers two cache forms: one is a single buffer area, and the other is multiple buffer areas corresponding to the number of usage nodes, for example a ring buffer area generated according to the number of usage nodes, where each unit in the ring buffer area stores the address of the data queue corresponding to each data using node. The data generation end sends the data to the data queue corresponding to each data using node, and the data using nodes then process the data.

The two cache forms have the following defects:

1. The single buffer area has the shortest-board problem when synchronizing data between the data generation end and the data use end: the data using node that uses or processes data fastest is limited by the data using node that uses or processes data slowest, so the data processing efficiency of the whole system ends up matching the speed of the slowest data using node, and the speed at which the data generating end generates data is likewise determined by the slowest node. When the slowest data using node has not yet used or processed the data in the limited buffer, the data in the buffer cannot be cleared, so the data generating end cannot continue to generate or store data into the buffer, and the data using nodes with faster data using or processing speeds cannot obtain new data. In summary, the data processing capacity or data processing efficiency of the entire system is severely limited.

2. When the data held by a data using node is corrupted or deleted by mistake, the corresponding data in the cache region has already been removed, so the data using node cannot obtain the data directly from the cache region; it has to ask the data generating end to regenerate it, which wastes the computing resources of the data generating end.

3. When a new data use node is added, historical data cannot be synchronized to it; the data generation end has to be asked to regenerate the data, which wastes the computing resources of the data generation end.

In general, the existing cache form has the technical problem that the data processing efficiency or the data processing capacity of the whole system is low.

The data processing method provided by the application aims to solve the technical problems in the prior art.

The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.

Fig. 1 is a schematic view of an application scenario of a data processing method according to an embodiment of the present application. As shown in fig. 1, when a MySQL database performs data synchronization, one source library corresponds to a plurality of target libraries, that is, one data generation end 101 corresponds to a plurality of data usage nodes of the data usage end 102. Because of limitations in network bandwidth, hardware and other conditions at different data usage nodes, the capability of each database to process or use the source data differs during synchronization, and the database with the lowest processing or usage capability seriously affects the capability of the source database, i.e., the capability of the data generating end 101 to produce data. For this reason, in the embodiment of the present application, the data cache end 103 is provided with a first cache region and a historical data storage region; the first cache region may transfer data meeting the preset condition into the historical data storage region at regular intervals, and when the data needed by a data usage node with lower processing or usage capability has already been transferred as historical data, the target historical data required by that node can be sent to it from the historical data storage region.

It should be noted that, in this embodiment, the first cache region may be a storage unit for performing fast data reading, such as a flash memory, and the historical data storage region may be a storage region on a mechanical hard disk with a slower read-write speed, or a storage region of a solid state hard disk with a faster read-write speed, or the like.
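As a rough illustration of this two-tier arrangement (not taken from the patent), the sketch below models the first cache region as an in-memory dictionary and the historical data storage region as an on-disk SQLite table; the class and method names and the file name history.db are illustrative assumptions.

```python
# A minimal sketch, assuming the first cache region is fast in-process memory
# and the historical data storage region is slower on-disk storage (SQLite here).
# Names and storage choices are illustrative, not the patent's implementation.
import sqlite3

class TwoTierStore:
    def __init__(self, history_path="history.db"):
        self.first_cache = {}                          # first cache region: data_id -> payload (bytes)
        self.history = sqlite3.connect(history_path)   # historical data storage region
        self.history.execute(
            "CREATE TABLE IF NOT EXISTS history (data_id TEXT PRIMARY KEY, payload BLOB)")

    def transfer_to_history(self, data_id):
        """Move one entry from the first cache region into the historical data storage region."""
        payload = self.first_cache.pop(data_id)
        self.history.execute(
            "INSERT OR REPLACE INTO history (data_id, payload) VALUES (?, ?)",
            (data_id, payload))
        self.history.commit()

    def fetch_history(self, data_id):
        """Recall historical data, e.g. for a newly added data usage node."""
        row = self.history.execute(
            "SELECT payload FROM history WHERE data_id = ?", (data_id,)).fetchone()
        return row[0] if row else None
```

In practice the two tiers would simply be whatever fast and slow storage devices the deployment provides; the point is only that transferring and recalling data are cheap operations compared with regenerating it at the data generation end.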

By means of timing cleaning, the data in the first cache region is transferred to the historical data storage region, so the precious capacity of the first cache region can be reclaimed at regular intervals, and the data generation end can keep working efficiently and continuously generating data instead of being forced to stop because the first cache region is full. In addition, a data using node with stronger processing or usage capability can quickly acquire new data without being limited by the data using nodes with weaker capability, which solves the shortest-board problem. A data using node with weaker processing or usage capability can still obtain, from the historical data storage region, data that it could not fetch from the first cache region in time, without requiring the data generating end to regenerate it. Furthermore, if a new data using node is added, or a data using node's data is corrupted or deleted by mistake, the historical data can be acquired directly from the historical data storage region without occupying the data generating end to regenerate it. The data processing method of this embodiment therefore solves the shortest-board problem, avoids the high cost and complex operation of adding new usage nodes, and improves the data security of each data using node, which is a marked improvement over the prior art and greatly increases the data processing capacity and efficiency of the whole data processing system.

The detailed steps of the data processing method provided by the present application are described below with reference to the accompanying drawings.

Fig. 2 is a schematic flow chart of a data processing method provided in the present application. As shown in fig. 2, the specific steps of the data processing method include:

s201, acquiring the capacity information and the data use information of the first buffer area.

In this step, the first buffer is used to store data generated by the data generating end, the capacity information is used to indicate dynamic usage of capacity, and the data usage information is used to indicate usage of the data by each usage node in the data usage end.

Specifically, the data generation end or the data cache end monitors the dynamic capacity usage of the first cache region in real time, such as the used capacity, the remaining capacity, the dynamic change rate of the used capacity, the dynamic change rate of the remaining capacity, and the like. The capacity information may serve as a criterion for determining whether the data generation end needs to stop generating or storing new data, and may also serve as a trigger signal for the timing cleaning mode of the first buffer area: when the remaining capacity reaches a preset remaining threshold, or when the used capacity reaches a preset limit, the timing cleaning mode is triggered. The timing cleaning mode transfers the data meeting the preset condition in the first cache region, such as data that has been used by at least one data using node of the data using end, to the historical data storage region.

The data usage information may be used to determine which data may be transferred, which reflects the usage rate of the data usage node for the data, and for example, the number of times that the same data is called by different data usage nodes may be recorded as the data usage information, and when the number of times of usage exceeds half of the total number of the data usage nodes, the data may be transferred to the history data storage area.
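A minimal sketch of how the capacity information and data use information described in S201 could be tracked; the class, field names and units are assumptions for illustration, not the patent's API.

```python
import time
from collections import defaultdict

class FirstCacheMonitor:
    """Tracks capacity information (dynamic capacity usage) and data use information
    (which usage nodes have used which data) for the first cache region.
    All names and units here are illustrative assumptions."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.use_records = defaultdict(set)     # data_id -> set of node ids that used it
        self._last_sample = (time.monotonic(), 0)

    def on_store(self, size_bytes):
        """Called when the data generation end stores new data."""
        self.used += size_bytes

    def on_use(self, data_id, node_id):
        """Called when a data usage node uses (reads) a piece of data."""
        self.use_records[data_id].add(node_id)

    def capacity_info(self):
        """Remaining capacity and the rate of change of used capacity (bytes per second)."""
        now = time.monotonic()
        last_t, last_used = self._last_sample
        rate = (self.used - last_used) / max(now - last_t, 1e-6)
        self._last_sample = (now, self.used)
        return {"remaining": self.capacity - self.used, "used_change_rate": rate}

    def data_use_info(self):
        """Number of distinct usage nodes that have used each piece of data."""
        return {data_id: len(nodes) for data_id, nodes in self.use_records.items()}
```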

S202, transferring the data meeting the preset transfer condition from the first cache region to the historical data storage region according to the capacity information and the data use information.

In this step, the preset transfer conditions at least include: a first transfer condition, a second transfer condition, and a third transfer condition.

Specifically, the first transfer condition is used for meeting the data production efficiency requirement of the data generation end. When the used capacity of the first cache region reaches a preset limit value or the residual capacity reaches a preset residual value, partial data or all data in the first cache region can be transferred to the historical data storage region.

In a possible design, the data generating end may also be combined with the data generating rate to determine whether to transfer part of or all of the data in the first buffer area to the historical data storage area.

Optionally, the production efficiency of the data generated by the data generating end may also be judged by the change rate of the used capacity or the remaining capacity, and when the change rate is greater than a preset threshold, it may be judged that the data generating end rapidly generates a large amount of data, so that the timing cleaning mode of the first buffer area may be started, and part of or all of the data in the first buffer area may be transferred to the historical data storage area according to a preset time period.

The second transfer condition is used for meeting the data use requirement of a high-speed data use node, where the data use rate of the high-speed data use node is greater than or equal to a preset rate. The data use information mainly plays a role here: for each data use node, only the unused data in the first cache region is valid data, and for a high-speed data use node the proportion of valid data is small, or in other words the proportion of used data is high, so that once a preset proportion is reached the data processing efficiency of the high-speed data use node is limited. For this reason, by detecting the proportion of used data for each data usage node, data that has already been used by a preset number of data usage nodes can be transferred into the historical data storage area.

For example, if the data using end has 3 data using nodes, a piece of data that has already been used by two of them can be transferred to the historical data storage area.

The third transfer condition is used for cleaning the first cache area in time after the data is used by all the using nodes. That is, through the data use information, when it is detected that any data in the first cache region has been used by all the data use nodes, the data can be transferred to the historical data storage region, and further, all the data can be transferred to the historical data storage region.

It should be noted that the data transferred to the history data storage area can still be called by any data usage node in the data usage end. Even if the data usage node is newly added, the history data in the history data storage area may be directly transmitted to the newly added data usage node in order to enable the data usage node to use the history data.

Therefore, by using the data processing method provided by the embodiment of the application, when a new data using node is added, the data caching end and the data generating end do not need to be changed, and compared with the prior art, the data processing method is simple and convenient to introduce and low in cost.

Meanwhile, any data using node can call the historical data in the historical data storage area, so that when the data of the data using node is wrong or is deleted by mistake, the data can be called by reading the historical data storage area again without regenerating the data generating end, the data safety of the data using node is improved, and meanwhile, the computing resource consumption of the data generating end is saved.

This embodiment provides a data processing method in which capacity information and data use information of a first cache area are obtained, where the first cache area is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data; data meeting a preset transfer condition is then transferred from the first cache region to the historical data storage region according to the capacity information and the data use information. This solves the technical problem that in the prior art, data transmission between the data generating end and the data using end is limited by the data using node with the weakest data processing capability, so that the overall data processing capability of the system is low. Meanwhile, data can be called from the historical data storage area, so that adding new data using nodes is simpler and more convenient, and the data security of each data using node is also improved.

For ease of understanding, the following description will be made by taking one example of the first transfer condition among the preset transfer conditions.

Fig. 3 is a schematic flow chart of another data processing method according to an embodiment of the present application. As shown in fig. 3, the data processing method includes the specific steps of:

s301, capacity information and data use information of the first buffer area are obtained.

In this step, the first buffer is used to store data generated by the data generating end, the capacity information is used to indicate dynamic usage of capacity, and the data usage information is used to indicate usage of the data by each usage node in the data usage end.

And S302, determining the residual capacity information according to the capacity information.

In this step, the remaining capacity information includes: a remaining capacity, and/or a rate of change of the remaining capacity.

The specific remaining capacity information may be represented in at least three ways:

the first is expressed in terms of remaining capacity;

the second is represented by the rate of change of the remaining capacity;

the third is to use the remaining capacity and the rate of change of the remaining capacity together.

And S303, judging whether the residual capacity information meets a first transfer condition.

In this step, if the first transfer condition is satisfied, the process proceeds to S304; otherwise, the process returns to S301.

In this embodiment, if the remaining capacity information is represented by a remaining capacity, the first transfer condition is that the remaining capacity is less than or equal to a preset capacity threshold;

if the remaining capacity information is represented by a change rate of the remaining capacity, the first transfer condition is that the change rate is greater than or equal to a preset change threshold;

if the remaining capacity information is collectively represented by the remaining capacity and the rate of change of the remaining capacity, the first transition condition is that the remaining capacity is less than or equal to a preset capacity threshold and the rate of change is greater than or equal to a preset change threshold.

S304, transferring the data to be transferred from the first cache region to the historical data storage region according to the data use information and the preset period.

In this step, the data usage information is used to judge or identify the data to be transferred.

In a possible embodiment, the data to be transferred includes used data, which is data used by at least one of the nodes.

In another possible implementation, the data to be transferred includes all the data in the first buffer area.

In another possible implementation manner, the data to be transferred includes data whose number of uses reaches a preset number of uses, for example, data whose number of uses is greater than or equal to half of the total number of data usage nodes.

Specifically, the preset period may be 30 s, i.e., the transfer is performed every 30 seconds, so that the first buffer area keeps sufficient capacity and the data generation end can maintain high data production efficiency.
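The first transfer condition above reduces to a simple periodic check. The sketch below is a hedged illustration only: the thresholds, the 30 s period, and the choice of "used by at least one node" as the data to be transferred are assumptions picked from the options described in this embodiment.

```python
def first_condition_met(remaining_bytes, change_rate_bytes_per_s,
                        capacity_threshold=1 << 20, change_threshold=256 << 10):
    """First transfer condition: remaining capacity is low and/or usage is growing quickly.
    The numeric thresholds are illustrative assumptions."""
    return (remaining_bytes <= capacity_threshold
            or change_rate_bytes_per_s >= change_threshold)

def periodic_cleanup_tick(first_cache, used_by, remaining_bytes, change_rate):
    """One pass of the timing cleaning mode, run once per preset period (e.g. every 30 s).
    first_cache: dict data_id -> payload; used_by: dict data_id -> set of node ids.
    Returns the entries moved out of the first cache region this period."""
    transferred = {}
    if first_condition_met(remaining_bytes, change_rate):
        # Here the data to be transferred is every entry already used by at least
        # one usage node (one of the optional choices described above).
        for data_id in [d for d in list(first_cache) if used_by.get(d)]:
            transferred[data_id] = first_cache.pop(data_id)
    return transferred
```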

This embodiment provides a data processing method in which capacity information and data use information of a first cache area are obtained, where the first cache area is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data; data meeting a preset transfer condition is then transferred from the first cache region to the historical data storage region according to the capacity information and the data use information. This solves the technical problem that in the prior art, data transmission between the data generating end and the data using end is limited by the data using node with the weakest data processing capability, so that the overall data processing capability of the system is low. Meanwhile, data can be called from the historical data storage area, so that adding new data using nodes is simpler and more convenient, and the data security of each data using node is also improved.

For ease of understanding, the following description will be made by taking one example of the second transition condition among the preset transition conditions.

Fig. 4 is a schematic flowchart of another data processing method according to an embodiment of the present application. As shown in fig. 4, the data processing method includes the specific steps of:

s401, acquiring the capacity information and the data use information of the first buffer area.

In this step, the first buffer is used to store data generated by the data generating end, the capacity information is used to indicate dynamic usage of capacity, and the data usage information is used to indicate usage of the data by each usage node in the data usage end.

S402, determining the proportion of used data according to the data use information.

In this step, the used data is data used by at least one of the nodes, and the proportion of used data is the ratio of the amount of used data to the total amount of data in the first buffer area.

And S403, judging whether the proportion is larger than or equal to a preset proportion.

In this step, if yes, S404 is executed; otherwise, the process returns to S401.

Specifically, the preset ratio may be any value between 0 and 1, and a person skilled in the art may select it according to actual needs; for example, it may be chosen according to the processing rate of the data using node with the highest data processing rate, or according to the average data processing rate of all the data using nodes.

S404, judging whether the using times of the used data are larger than or equal to the preset using times.

In this step, if yes, S405 is executed, otherwise, S401 is executed again.

Specifically, the preset number of times of use may be half of the total number of data use nodes, and it can be understood that a person skilled in the art may set the preset number of times according to the needs of an actual application scenario. The preset number of times of use may also be set to a dynamic value, which changes with the average data processing rate of all data usage nodes or the processing rate of the data usage node with the fastest data processing rate.

S405, transferring the data to be transferred from the first cache region to a historical data storage region according to the data use information.

In this step, the data usage information is used to judge or identify the data to be transferred.

In a possible embodiment, the data to be transferred includes used data, which is data used by at least one of the nodes.

In another possible implementation, the data to be transferred includes all the data in the first buffer area.

In another possible implementation manner, the data to be transferred includes data whose number of uses reaches a preset number of uses, for example, data whose number of uses is greater than or equal to half of the total number of data usage nodes.
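A hedged sketch of the second transfer condition (S402 to S405): the proportion of used data and the per-entry use count gate the transfer. The default preset proportion and preset number of uses below are assumptions, not values fixed by the embodiment.

```python
def second_condition_transfer(first_cache, used_by, total_nodes,
                              preset_ratio=0.5, preset_uses=None):
    """first_cache: dict data_id -> payload; used_by: dict data_id -> set of node ids.
    Transfers entries whose use count reaches the preset number of uses, but only when
    the proportion of used data in the first cache region reaches the preset ratio."""
    if not first_cache:
        return {}
    if preset_uses is None:
        preset_uses = max(1, total_nodes // 2)       # e.g. half of the data usage nodes
    used_ids = [d for d in first_cache if used_by.get(d)]
    proportion = len(used_ids) / len(first_cache)    # ratio of used data to all cached data
    transferred = {}
    if proportion >= preset_ratio:
        for data_id in used_ids:
            if len(used_by[data_id]) >= preset_uses:
                transferred[data_id] = first_cache.pop(data_id)
    return transferred
```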

This embodiment provides a data processing method in which capacity information and data use information of a first cache area are obtained, where the first cache area is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data; data meeting a preset transfer condition is then transferred from the first cache region to the historical data storage region according to the capacity information and the data use information. This solves the technical problem that in the prior art, data transmission between the data generating end and the data using end is limited by the data using node with the weakest data processing capability, so that the overall data processing capability of the system is low. Meanwhile, data can be called from the historical data storage area, so that adding new data using nodes is simpler and more convenient, and the data security of each data using node is also improved.

For ease of understanding, the following description will be given by taking one example of the third transition condition among the preset transition conditions.

Fig. 5 is a flowchart illustrating another data processing method according to an embodiment of the present application. As shown in fig. 5, the data processing method includes the specific steps of:

s501, acquiring the capacity information and the data use information of the first buffer area.

In this step, the first buffer is used to store data generated by the data generating end, the capacity information is used to indicate dynamic usage of capacity, and the data usage information is used to indicate usage of the data by each usage node in the data usage end.

S502, judging whether all the data in the first cache region are used by all the data using nodes.

In this step, if yes, S503 is executed; otherwise, the process returns to S501 and the determination is repeated in a loop.

S503, all data are transferred to the historical data storage area.

It should be noted that, in this embodiment, even when the processing capabilities of the data usage nodes do not differ much and the data generation end does not generate a large amount of data, the data in the first cache region is not deleted directly after being used, as in the prior art, but is transferred to the historical data storage region, so that historical data can be recalled by existing data usage nodes or by a newly added data usage node without occupying the computing resources of the data generation end to regenerate it.
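The third transfer condition (S502 and S503) can be expressed as a single superset check; the sketch below is illustrative and the function name is an assumption.

```python
def third_condition_transfer(first_cache, used_by, all_node_ids):
    """If every piece of data in the first cache region has been used by every data
    usage node, transfer all of it to the historical data storage region.
    first_cache: dict data_id -> payload; used_by: dict data_id -> set of node ids."""
    all_nodes = set(all_node_ids)
    if first_cache and all(all_nodes <= used_by.get(d, set()) for d in first_cache):
        transferred = dict(first_cache)
        first_cache.clear()
        return transferred
    return {}
```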

This embodiment provides a data processing method in which capacity information and data use information of a first cache area are obtained, where the first cache area is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data; data meeting a preset transfer condition is then transferred from the first cache region to the historical data storage region according to the capacity information and the data use information. This solves the technical problem that in the prior art, data transmission between the data generating end and the data using end is limited by the data using node with the weakest data processing capability, so that the overall data processing capability of the system is low. Meanwhile, data can be called from the historical data storage area, so that adding new data using nodes is simpler and more convenient, and the data security of each data using node is also improved.

It should be noted that the three embodiments shown in fig. 3 to fig. 5 can be implemented as three parallel implementations, or parallel threads, to implement the data processing method of the present application.

On the basis of the above embodiments, in order to further explain how the data using node invokes the data in the history data storage area, the following explanation is made with respect to the embodiment shown in fig. 6.

Fig. 6 is a schematic flowchart of yet another data processing method according to an embodiment of the present application. As shown in fig. 6, the data processing method includes the specific steps of:

s601, obtaining a historical data using request of a data using node in a data using end.

S602, determining target historical data in the historical data storage area according to the historical data use request.

S603, sending the target historical data to the data using node.

In this step, there may be various embodiments for sending the target history data to the data usage node.

In a possible implementation manner, if the remaining capacity of the first cache region meets the storage requirement of the target history data, the target history data is stored in the first cache region to be used by the data using node.

In another possible implementation manner, if the remaining capacity of the first cache region does not meet the storage requirement of the target history data, the target history data is stored in a second cache region for the data using node to use.

In yet another possible implementation, the target history data is stored to a second cache region for use by the data consuming node.

It should be noted that, in this embodiment, the second cache region is a storage device capable of fast reading and writing, such as a memory or a flash memory, and may also be a solid state disk, which is of the same type as the first cache region.
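A hedged sketch of S601 to S603: the requested target historical data is looked up in the historical data storage region and placed in the first cache region if its remaining capacity suffices, otherwise in the second cache region. All parameter names are illustrative assumptions.

```python
def serve_history_request(data_id, history, first_cache, second_cache, first_cache_capacity):
    """history, first_cache, second_cache: dict data_id -> payload (bytes);
    first_cache_capacity: total capacity of the first cache region in bytes.
    Returns the target historical data, or None if it is not in the history store."""
    payload = history.get(data_id)                      # S602: locate the target historical data
    if payload is None:
        return None
    used = sum(len(v) for v in first_cache.values())
    if first_cache_capacity - used >= len(payload):     # remaining capacity meets the storage need
        first_cache[data_id] = payload                  # serve via the first cache region
    else:
        second_cache[data_id] = payload                 # otherwise fall back to the second cache region
    return payload                                      # S603: the usage node can now read the data
```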

It should be noted that, in this embodiment, the historical data use request may be sent by any data using node in the data using end, because during data use the data may become corrupted or be deleted by mistake for various reasons. At this time, the data can be recovered simply by executing the data processing method of this embodiment and calling the historical data from the historical data storage area. This improves the data security of the data using node, avoids having the data generating end recompute the historical data, saves the computing resources of the data generating end, and improves the data processing efficiency and capability of the system.

In addition, for the newly added data usage node, the historical data may also be acquired according to the data processing method shown in fig. 6, so that each data usage node can obtain the same data, and the data generation end is not required to regenerate.

This embodiment provides a data processing method in which capacity information and data use information of a first cache area are obtained, where the first cache area is used for storing data generated by a data generation end, the capacity information is used for representing the dynamic use condition of capacity, and the data use information is used for representing how each use node in the data use end uses the data; data meeting a preset transfer condition is transferred from the first cache region to the historical data storage region according to the capacity information and the data use information; a historical data use request of a data use node in the data use end is then acquired, target historical data in the historical data storage area is determined according to the historical data use request, and finally the target historical data is sent to the data use node. This solves the technical problem that in the prior art, data transmission between the data generating end and the data using end is limited by the data using node with the weakest data processing capability, so that the overall data processing capability of the system is low. Meanwhile, data can be called from the historical data storage area, so that adding new data using nodes is simpler and more convenient, and the data security of each data using node is also improved.

Fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing apparatus 700 may be implemented by software, hardware, or a combination of both.

As shown in fig. 7, the data processing apparatus 700 includes:

an obtaining module 701, configured to obtain capacity information and data usage information of a first cache region, where the first cache region is used to store data generated by a data generating end, the capacity information is used to indicate the dynamic usage of the capacity, and the data usage information is used to indicate how each using node in the data using end uses the data;

a processing module 702, configured to transfer, according to the capacity information and the data usage information, data that meets a preset transfer condition from the first cache region to a historical data storage region.
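For illustration only, the following minimal Python sketch outlines the structure of the apparatus of fig. 7; the class and method names are invented for the sketch and do not appear in the embodiment.

```python
class ObtainingModule:
    """Illustrative stand-in for the obtaining module 701."""

    def obtain(self, first_cache):
        # Would collect the capacity information and the data usage information
        # of the first cache region.
        raise NotImplementedError


class ProcessingModule:
    """Illustrative stand-in for the processing module 702."""

    def transfer(self, capacity_info, usage_info, first_cache, history_store):
        # Would move the data meeting a preset transfer condition from the first
        # cache region to the historical data storage region.
        raise NotImplementedError


class DataProcessingApparatus:
    """Illustrative stand-in for the data processing apparatus 700."""

    def __init__(self):
        self.obtaining_module = ObtainingModule()
        self.processing_module = ProcessingModule()
```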

In a possible design, the preset transfer condition includes a first transfer condition, where the first transfer condition is used to meet a data production efficiency requirement of a data generation end, and correspondingly, the processing module 702 is specifically configured to:

determining remaining capacity information according to the capacity information, the remaining capacity information including: a remaining capacity, and/or a rate of change of the remaining capacity;

and if the residual capacity is less than or equal to a preset capacity threshold value and/or the change rate is greater than or equal to a preset change threshold value, transferring the data to be transferred from the first cache region to the historical data storage region according to the data use information and a preset period.
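For illustration only, a hedged Python sketch of the first transfer condition is given below; the function and parameter names, and the reading of "and/or" as either signal being sufficient, are assumptions of the sketch.

```python
def should_transfer_by_capacity(
    remaining_capacity: float,
    remaining_change_rate: float,   # rate at which the remaining capacity shrinks
    capacity_threshold: float,      # preset capacity threshold
    change_threshold: float,        # preset change threshold
) -> bool:
    # Either signal (or both) triggers the transfer; once triggered, the data to
    # be transferred is moved according to the data usage information and a
    # preset period.
    return (remaining_capacity <= capacity_threshold
            or remaining_change_rate >= change_threshold)
```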

In a possible design, the preset transfer condition includes a second transfer condition, where the second transfer condition is used to satisfy the data usage requirement of a high-speed data using node whose data usage rate is greater than or equal to a preset rate, and correspondingly, the processing module 702 is specifically configured to:

determining a proportion of used data based on the data usage information, the used data being data used by at least one of the nodes;

and when the proportion is larger than or equal to a preset proportion, if the using times of the used data are larger than or equal to the preset using times, transferring the data to be transferred from the first cache region to the historical data storage region.

Optionally, the data to be transferred includes used data, and the used data is data used by at least one node.

Optionally, the data to be transferred includes all the data in the first cache region.
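For illustration only, the following Python sketch shows one possible way to evaluate the second transfer condition and select the data to be transferred, including the optional variant of moving all the data in the first cache region; the names and the data-structure layout are assumptions of the sketch.

```python
from typing import Dict, List


def select_data_for_transfer(
    usage_counts: Dict[str, int],   # one usage count per entry currently in the first cache region
    min_used_ratio: float,          # preset proportion of used data
    min_use_count: int,             # preset number of uses
    transfer_all: bool = False,     # optional variant: move everything in the first cache region
) -> List[str]:
    total = len(usage_counts)
    used_keys = [k for k, n in usage_counts.items() if n >= 1]
    # The second transfer condition only applies once a large enough share of the
    # cached data has been used by at least one node.
    if total == 0 or len(used_keys) / total < min_used_ratio:
        return []
    if transfer_all:
        return list(usage_counts)
    # Otherwise move only the used entries whose usage count reaches the preset number of uses.
    return [k for k in used_keys if usage_counts[k] >= min_use_count]
```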

In a possible design, the preset transfer condition includes a third transfer condition, where the third transfer condition is used to clean the first cache region in time after the data has been used by all the using nodes, and correspondingly, the processing module 702 is specifically configured to:

and if all the data in the first cache region are used by all the data using nodes, transferring all the data to the historical data storage region.
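For illustration only, a small Python sketch of the third transfer condition check is given below; the per-node usage record format is an assumption of the sketch.

```python
from typing import Dict, Set


def all_data_consumed(
    cache_keys: Set[str],                # keys of all data currently in the first cache region
    usage_by_node: Dict[str, Set[str]],  # node identifier -> keys that node has already used
    all_nodes: Set[str],                 # every data using node in the data using end
) -> bool:
    # The third transfer condition holds only when every node has used every entry,
    # after which the whole first cache region can be moved to the history store.
    return all(cache_keys <= usage_by_node.get(node, set()) for node in all_nodes)
```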

In a possible design, the obtaining module 701 is further configured to obtain a historical data usage request of a data usage node in a data usage end;

the processing module 702 is further configured to determine target historical data in the historical data storage area according to the historical data usage request;

the processing module 702 is further configured to send the target history data to the data usage node.

In one possible design, the processing module 702 is further configured to send the target historical data to the data using node, and specifically includes:

and if the residual capacity of the first cache region meets the storage requirement of the target historical data, storing the target historical data into the first cache region for the data using node to use.

Optionally, the processing module 702 is further configured to send the target history data to the data using node, and specifically includes:

and if the residual capacity of the first cache region does not meet the storage requirement of the target historical data, storing the target historical data in a second cache region for the data using node to use.

In a possible design, the processing module 702 is further configured to send the target history data to the data usage node, and specifically includes:

and storing the target historical data to a second cache region for the data using node to use.

It should be noted that the data processing apparatus provided in the embodiment shown in fig. 7 can execute the method provided in any of the above method embodiments, and the specific implementation principle, technical features, term interpretation and technical effects thereof are similar and will not be described herein again.

Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 8, the electronic device 800 may include: at least one processor 801 and a memory 802. Fig. 8 shows the electronic device with one processor as an example.

The memory 802 stores programs. In particular, the program may include program code including computer operating instructions.

The memory 802 may comprise a high-speed RAM, and may also include a non-volatile memory, such as at least one disk storage.

The processor 801 is configured to execute computer-executable instructions stored in the memory 802 to implement the methods described in the method embodiments above.

The processor 801 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.

Alternatively, the memory 802 may be separate from or integrated with the processor 801. When the memory 802 is a device independent of the processor 801, the electronic device 800 may further include:

a bus 803 for connecting the processor 801 and the memory 802. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like, but this does not mean that there is only one bus or one type of bus.

Alternatively, in a specific implementation, if the memory 802 and the processor 801 are integrated into a chip, the memory 802 and the processor 801 may communicate through an internal interface.

An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium may include various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. Specifically, the computer-readable storage medium stores program instructions for the methods in the above method embodiments.

An embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the method in the foregoing method embodiments.

An embodiment of the present application further provides a data processing system, including: a data generating end, a data caching end, and a data using end; wherein,

the data generating end is used for generating data to be used and storing the data to be used into the data caching end;

the data caching end comprises a first cache region and a historical data storage region, where the historical data storage region is used to store the data in the first cache region that meets a preset transfer condition; optionally, the data caching end further comprises a second cache region, and when a data using node requests to use historical data, the target historical data is stored from the historical data storage region into the second cache region for the data using node to use;

the data using end comprises at least one data using node, and the data using node is used for calling data in the first cache region or the historical data storage region;

the data generating end or the data caching end is further configured to implement any one of the possible data processing methods provided by the above method embodiments.
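For illustration only, the following Python sketch wires the three ends of the system together in the simplest possible way; all class and attribute names are assumptions of the sketch, not part of the claimed system.

```python
from typing import Dict, List


class DataCachingEnd:
    def __init__(self) -> None:
        self.first_cache: Dict[str, bytes] = {}    # written by the data generating end
        self.history_store: Dict[str, bytes] = {}  # historical data storage region
        self.second_cache: Dict[str, bytes] = {}   # optional staging area for recalled history


class DataGeneratingEnd:
    def produce(self, cache_end: DataCachingEnd, key: str, value: bytes) -> None:
        # Newly generated data is placed in the first cache region for the using end to consume.
        cache_end.first_cache[key] = value


class DataUsingEnd:
    def __init__(self, node_ids: List[str]) -> None:
        self.node_ids = node_ids                   # at least one data using node

    def read(self, cache_end: DataCachingEnd, key: str) -> bytes:
        # A node calls data from the first cache region, falling back to the
        # historical data storage region when the entry has already been transferred.
        if key in cache_end.first_cache:
            return cache_end.first_cache[key]
        return cache_end.history_store[key]
```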

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
