Method, device and storage medium for block caching of data

Document No.: 735113    Publication date: 2021-04-20

Reading note: This technology, "Method, device and storage medium for block caching of data" (一种分块缓存数据的方法、装置及存储介质), was created by Li Dong (李栋) on 2020-12-21. Its main content is as follows: The invention discloses a method, a device and a storage medium for caching data in blocks. The method comprises: after a cache data request of a computing task is received, determining the target data required by the computing task and the plurality of data blocks included in the target data; thereafter, caching only one of the data blocks at a time, and caching the next data block only after the computing unit corresponding to the current block has been executed and a computing result obtained, until every one of the plurality of data blocks has been cached. Because only one of the plurality of data blocks is cached at a time, the storage space required for caching is low: the existing storage space can be utilized to the maximum extent, hardware cost is reduced, and situations of insufficient storage space are reduced.

1. A method of block caching data, the method comprising:

receiving a cache data request of a computing task, wherein the computing task comprises a plurality of computing units which can be executed concurrently;

determining target data required by the computing task and a plurality of data blocks included by the target data, wherein each data block in the plurality of data blocks is used for computing of at least one computing unit in the plurality of computing units;

caching one data block of the plurality of data blocks, and caching the next data block after confirming that the computing unit corresponding to the current data block has been executed and a computing result obtained, until each data block of the plurality of data blocks has been cached.

2. The method of claim 1, prior to determining target data required for the computing task and a plurality of data blocks included in the target data, the method further comprising:

configuring the number of data blocks included in the target data;

and dividing the target data into a plurality of mutually independent data blocks according to the number of the data blocks.

3. The method of claim 1, prior to said caching one of the plurality of data blocks, the method further comprising:

judging whether all the data blocks of the target data can be cached at once, and if not, continuing with the following operations.

4. The method of claim 1, prior to said caching one of the plurality of data blocks, the method further comprising:

acquiring an identifier of each data block of the plurality of data blocks and sorting all the identifiers of the plurality of data blocks to obtain an ordered queue;

accordingly, caching one of the plurality of data blocks includes:

taking the identifier of one data block from the ordered queue, and caching the data block corresponding to that identifier.

5. The method of claim 1, wherein the confirming that the computing unit corresponding to the corresponding data block is executed and obtains a computing result comprises:

in the process of executing the computing units concurrently, acquiring the reference number of the corresponding data block, wherein the reference number is the number of times that the corresponding data block is used by all the computing units;

and when the reference number of the corresponding data block is 0, confirming that the calculation unit corresponding to the corresponding data block is executed and obtaining a calculation result.

6. The method of claim 5, prior to the obtaining the reference number of the respective data block, the method further comprising:

analyzing each computing unit of the computing task to obtain the number of times of use of each computing unit on the target data;

accumulating the use times of each calculation unit to the corresponding data blocks to obtain the reference number of the target data;

acquiring an identifier of each data block in a plurality of data blocks included in the target data;

and recording the reference number of the target data as the reference number of the corresponding data block through the identification of each data block.

7. The method of claim 5, further comprising, during concurrent execution of the computing units:

and if the computing unit uses the cache to obtain the corresponding data block, subtracting 1 from the reference number of the corresponding data block.

8. The method of claim 1, prior to said caching a next data chunk, the method further comprising:

the corresponding data block is purged from the cache.

9. An apparatus for block caching data, the apparatus comprising:

the cache data request receiving module is used for receiving a cache data request of a computing task, wherein the computing task comprises a plurality of computing units which can be executed concurrently;

a target data and data block determination module, configured to determine target data required by the computing task and a plurality of data blocks included in the target data, where each data block of the plurality of data blocks is used for computing by at least one computing unit of the plurality of computing units;

and the data block cache module is used for caching one data block in the plurality of data blocks, clearing the current data block and caching the next data block after confirming that the calculation unit corresponding to the corresponding data block is executed and obtaining the calculation result until each data block in the plurality of data blocks is cached.

10. A computer storage medium having stored thereon program instructions for performing, when running, a method of caching data in blocks according to any one of claims 1 to 8.

Technical Field

The present invention relates to the field of data processing, and in particular, to a method, an apparatus, and a storage medium for block caching data.

Background

As is well known, data caching is an effective way to increase data access speed, and is widely applied to various large data processing systems. In recent years, with the increasing development and popularization of network communication and computer technology, the application of large data is more and more extensive, which also puts higher requirements on the cache space of the data, especially for a large data processing system provided with a plurality of cache levels.

The existing big data cache mechanism usually adopts a full data loading mode, so that cache operation cannot be completed because of too large data volume and insufficient memory space. In addition, even if the data size is not large, the cache operation may not be completed frequently when the memory space or the disk space is occupied by a large number of critical processes and is insufficient by using a full data loading method.

In view of the above problems, simply adding storage space inevitably increases the cost of hardware construction and maintenance. Moreover, for systems that are not scalable and whose storage space cannot be increased, this approach may mean rebuilding the system, thereby causing a great waste of resources.

Therefore, how to improve the data caching method so as to use the cache space more fully, and thus reduce situations of insufficient storage space without adding storage, remains an unsolved technical problem.

Disclosure of Invention

In view of the above problems, the present inventors have innovatively provided a method, apparatus, and storage medium for block caching data.

According to a first aspect of the embodiments of the present invention, a method for block caching data includes: receiving a cache data request of a computing task, wherein the computing task comprises a plurality of computing units which can be executed concurrently; determining target data required by a computing task and a plurality of data blocks included by the target data, wherein each data block in the plurality of data blocks is used for computing of at least one computing unit in a plurality of computing units; caching one data block in the plurality of data blocks, caching the next data block after confirming that the computing unit corresponding to the corresponding data block is executed and obtaining the computing result until each data block in the plurality of data blocks is cached.

According to an embodiment of the present invention, before determining target data required by a computing task and a plurality of data blocks included in the target data, the method further includes: configuring the number of data blocks included in target data; and dividing the target data into a plurality of data blocks which are independent of each other according to the number of the data blocks.

According to an embodiment of the present invention, before caching one of the plurality of data blocks, the method further includes: judging whether all the data blocks of the target data can be cached at once, and if not, continuing with the following operations.

According to an embodiment of the present invention, before caching one of the plurality of data blocks, the method further includes: acquiring an identifier of each of the plurality of data blocks and sorting all the identifiers to obtain an ordered queue; accordingly, caching one of the plurality of data blocks includes: taking the identifier of one data block from the ordered queue and caching the data block corresponding to that identifier.

According to an embodiment of the present invention, determining that a computing unit corresponding to a corresponding data block is executed and obtains a computing result includes: in the process of executing the computing units concurrently, acquiring the reference number of the corresponding data block, wherein the reference number is the number of times that the corresponding data block is used by all the computing units; and when the reference number of the corresponding data block is 0, confirming that the calculation unit corresponding to the corresponding data block is executed and obtaining a calculation result.

According to an embodiment of the present invention, before obtaining the reference number of the corresponding data block, the method further includes: analyzing each computing unit of the computing task to obtain the number of times each computing unit uses the target data; accumulating the usage counts of all the computing units to obtain the reference number of the target data; acquiring an identifier of each of the plurality of data blocks included in the target data; and recording the reference number of the target data as the reference number of the corresponding data block via the identifier of each data block.

According to an embodiment of the present invention, in a process of concurrently executing computing units, the method further includes: if the computing unit uses the cache to obtain the corresponding data block, the reference number of the corresponding data block is reduced by 1.

According to an embodiment of the present invention, before caching the next data block, the method further includes: the corresponding data block is purged from the cache.

According to a second aspect of the embodiments of the present invention, an apparatus for block-caching data, the apparatus includes: the cache data request receiving module is used for receiving a cache data request of a computing task, wherein the computing task comprises a plurality of computing units which can be executed concurrently; the target data and data block determining module is used for determining target data required by a computing task and a plurality of data blocks included by the target data, wherein each data block in the plurality of data blocks is used for computing at least one computing unit in the plurality of computing units; and the data block cache module is used for caching one data block in the multiple data blocks, clearing the current data block and caching the next data block after confirming that the calculation unit corresponding to the corresponding data block is executed and obtaining a calculation result until each data block in the multiple data blocks is cached.

According to a third aspect of embodiments of the present invention, there is provided a computer storage medium comprising a set of computer executable instructions which, when executed, perform any of the above methods of block caching data.

An embodiment of the present invention provides a method, a device and a storage medium for caching data in blocks. The method comprises: after a cache data request of a computing task is received, determining the target data required by the computing task and the plurality of data blocks included in the target data; thereafter, caching only one data block at a time, and caching the next data block only after the computing unit corresponding to the current block has been executed and a computing result obtained, until each of the plurality of data blocks has been cached. At that point all the computing units of the computing task have been executed, and the subsequent computing process can continue to obtain the final result of the computing task. Because only one of the plurality of data blocks is cached at a time, the storage space required for caching is low; the existing storage space can be utilized to the maximum extent, hardware cost is reduced, and situations of insufficient storage space are reduced.

Drawings

The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:

in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.

FIG. 1 is a schematic diagram of a flow chart of an implementation of a block caching data method according to an embodiment of the present invention;

FIG. 2 is a flowchart illustrating a specific implementation of an application of a block caching method according to an embodiment of the present invention;

fig. 3 is a schematic structural diagram of a block cache data device according to an embodiment of the present invention.

Detailed Description

In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.

Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.

Fig. 1 shows a flow of implementing the method for caching data in blocks according to the embodiment of the present invention. Referring to fig. 1, the method includes: an operation 110 of receiving a cache data request for a computing task, wherein the computing task includes a plurality of concurrently executable computing units; operation 120, determining target data required by the computing task and a plurality of data blocks included in the target data, where each data block in the plurality of data blocks is used for computing of at least one computing unit in the plurality of computing units; in operation 130, one of the data blocks is cached, and after it is determined that the calculation unit corresponding to the corresponding data block is executed and the calculation result is obtained, the next data block is cached until each of the data blocks is cached.

It should be noted that the main purpose of data caching is to let computing units obtain the data they need more quickly; therefore, the method for caching data in blocks in the embodiment of the present invention is generally performed in cooperation with the scheduling and execution of a computing task. Generally, task scheduling and data caching can run in different threads, with a main control program coordinating their progress, or by means of a master thread and a slave thread. The implementer may choose any suitable implementation as required.

In operation 110, the request for the cached data may be only a trigger procedure or a command for initializing the cache management tool, and the cache management program can prepare various resources for the next step of caching the data, such as obtaining the corresponding storage space.

The computing unit mainly refers to the smallest computing unit that can be executed, for example, a certain function, a subtask, or a certain operation.

In operation 120, the target data is usually large and is therefore divided into independent data blocks as needed. Mutual independence mainly means that the data blocks have no coupling relation, and each data block contains at least all the data required for one computation by the corresponding computing unit. For example, suppose a table stores 10,000 records and each computation uses one record; each record is then the smallest divisible unit. If a division into 10 blocks is predefined, the data can be divided into 10 data blocks, each containing a certain number of records (for example, hundreds to thousands); the blocks do not overlap and together contain exactly the original 10,000 records. The data blocks required by a computing task may be declared by the task itself; alternatively, a correspondence between computing tasks and data can be maintained in advance, and the block identifiers for a task obtained from the task identifier at runtime; any other feasible manner may also be used.
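The division in the 10,000-record example above can be sketched as follows; this is a minimal illustration, not code from the patent, and all names are hypothetical:

```python
# Divide target data into a configured number of mutually independent,
# non-overlapping blocks, where each record is the smallest divisible unit.
def split_into_blocks(records, num_blocks):
    """Split records into num_blocks contiguous, non-overlapping blocks."""
    size, remainder = divmod(len(records), num_blocks)
    blocks, start = [], 0
    for i in range(num_blocks):
        end = start + size + (1 if i < remainder else 0)  # spread any remainder
        blocks.append(records[start:end])
        start = end
    return blocks

records = list(range(10_000))            # e.g. 10,000 table records
blocks = split_into_blocks(records, 10)  # predefined block count of 10
```

Together the 10 blocks contain every record exactly once, matching the requirement that the blocks be independent and not repeat data.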

In operation 130, it is necessary to ensure that every computing unit that needs a data block has obtained the corresponding data; otherwise, if any computing unit fails to receive its data and cannot complete its computation, the whole computing task may be unable to proceed. To confirm that the computing units corresponding to a data block have been executed and have obtained their results, the block data used by each computing unit and the execution state of each unit could be checked one by one, but this adds operations and consumes computing resources. Instead, the reference number of each data block can be obtained in advance, and during execution this reference number can drive both the caching of data and the execution of computation. Any other feasible manner may also be used.

According to an embodiment of the present invention, before determining target data required by a computing task and a plurality of data blocks included in the target data, the method further includes: configuring the number of data blocks included in target data; and dividing the target data into a plurality of data blocks which are independent of each other according to the number of the data blocks.

In this embodiment, an implementer may configure the number of data partitions according to the size of the storage space and other related requirements, and divide the target data into a plurality of data partitions independent of each other according to the number of data partitions. Therefore, the size of the block can be flexibly adjusted according to the implementation condition, so that the utilization rate of the cache is higher.

According to an embodiment of the present invention, before caching one of the plurality of data blocks, the method further includes: judging whether all the data blocks of the target data can be cached at once, and if not, continuing with the following operations.

If the storage space available to the cache is large enough to hold all the data blocks, the traditional caching method, caching the whole target data at once, is more efficient, since it avoids the operations of swapping data blocks. Therefore, in this embodiment, this determination is made first, so that the best caching method can be selected according to the operating conditions.
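The capacity check described in this embodiment might look like the following sketch; the function name, the byte-based sizes, and the example figures are all assumptions for illustration:

```python
def choose_caching_strategy(total_data_bytes, cache_capacity_bytes):
    """Cache everything when it fits; otherwise fall back to block caching."""
    if total_data_bytes <= cache_capacity_bytes:
        return "cache_all"       # traditional full-data caching, fewer swaps
    return "block_by_block"      # the block caching method described here

# Example: 8 GiB of target data against a 2 GiB cache budget.
strategy = choose_caching_strategy(total_data_bytes=8 * 2**30,
                                   cache_capacity_bytes=2 * 2**30)
```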

According to an embodiment of the present invention, before caching one of the plurality of data blocks, the method further includes: acquiring an identifier of each of the plurality of data blocks and sorting all the identifiers to obtain an ordered queue; accordingly, caching one of the plurality of data blocks includes: taking the identifier of one data block from the ordered queue and caching the data block corresponding to that identifier.

In this embodiment, sorting the identifiers of the data blocks allows them to be cached in order and ensures that every data block is cached, so none is missed. The ordering also serves as a means of cooperating with the computing task: the thread that schedules and executes the computing task can read the cached data in the same order.

According to an embodiment of the present invention, determining that a computing unit corresponding to a corresponding data block is executed and obtains a computing result includes: in the process of executing the computing units concurrently, acquiring the reference number of the corresponding data block, wherein the reference number is the number of times that the corresponding data block is used by all the computing units; and when the reference number of the corresponding data block is 0, confirming that the calculation unit corresponding to the corresponding data block is executed and obtaining a calculation result.

In this embodiment, the reference number of the data block is obtained in advance, and the calculation process and the data caching process are coordinated by the reference number during operation.

According to an embodiment of the present invention, before obtaining the reference number of the corresponding data partition, the method further includes: analyzing each computing unit of the computing task to obtain the number of times of using the target data by each computing unit; accumulating the use times of each calculation unit on the corresponding data blocks to obtain the reference number of the target data; acquiring an identifier of each data block in a plurality of data blocks included in target data; and recording the reference number of the target data as the reference number of the corresponding data block through the identification of each data block.

In this embodiment, the total reference number of the target data, which is also the reference number of each data block included in the target data, is obtained by analyzing how many times each computing unit references the target data and accumulating these counts. This analysis is usually performed statically before running, from the calling relationships between code and data. An implementer can use an existing code-analysis tool to obtain these calling relationships and, on that basis, count each computing unit's references to the target data.
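The pre-run bookkeeping described above can be illustrated with a small sketch; the unit names, counts, and identifiers here are hypothetical placeholders for what static analysis would produce:

```python
# Each computing unit's usage count of the target data (e.g. from static
# analysis of code/data calling relations) is accumulated into one reference
# number, which is then recorded against the identifier of every data block.
usage_per_unit = {"unit_a": 2, "unit_b": 1, "unit_c": 3}  # from code analysis
target_data_refs = sum(usage_per_unit.values())           # accumulated count

block_ids = ["blk-0", "blk-1", "blk-2"]
ref_counts = {bid: target_data_refs for bid in block_ids}  # per-block record
```

At runtime each use of a cached block would decrement the corresponding entry of `ref_counts`, and a block can be cleared once its entry reaches 0.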

According to an embodiment of the present invention, in a process of concurrently executing computing units, the method further includes: if the computing unit uses the cache to obtain the corresponding data block, the reference number of the corresponding data block is reduced by 1.

In the process of controlling or scheduling the concurrent execution of the computing units, some processing logic may be added after the computing process is completed, which may include the operation of subtracting 1 from the reference number.

According to an embodiment of the present invention, before caching the next data block, the method further includes: the corresponding data block is purged from the cache.

When the reference number of the data block is 0, or after it is confirmed that the computing unit corresponding to the corresponding data block is executed and obtains the computing result, it can basically be determined that the current computing task no longer needs the data block in the cache. In this embodiment, the corresponding data partition is cleared, so that more storage space can be made for the next data partition, and the utilization rate of the cache is further improved.

Fig. 2 is a flowchart illustrating a specific implementation of an application of the block caching data method according to an embodiment of the present invention. The application realizes the block cache of the data blocks of the target data by using a cache management tool in a Spark platform and combining the scheduling management of the calculation tasks.

In Spark, a Resilient Distributed Dataset (RDD) can be defined and divided into a plurality of data blocks (partitions); the number of blocks can be predefined according to the number of data files, Spark's default parallelism, or the output of a computation. In addition, Spark provides a data cache management facility (Spark Cache), which is well suited to implementing the method for caching data in blocks provided by the embodiment of the invention.

As shown in fig. 2, the specific steps of this process include:

step 2010, analyzing data reference counts from a Directed Acyclic Graph (DAG);

in Spark, the relation of the RDD is modeled by using DAG, and the dependency relation of the RDD is described, so that DAG can be analyzed to obtain the reference count of each data block data.

The reference count may be held in a temporary variable that the main program reads or updates through a parameter; the operations on this variable can be placed in the logic the main program executes after it makes the scheduling decision for each concurrent task.

Step 2020, starting computation, executing a plurality of concurrent computations (tasks) according to the DAG graph, and simultaneously starting another cache management thread to manage and operate cache data;

in step 2030, after the cache management thread receives the cache request, all data blocks corresponding to the computing task are determined.

2040, sorting the Ids of the data blocks to obtain an ordered queue;

step 2050, take the Id of a data block from the ordered queue (first the head of the queue, then each subsequent element in order), and cache the corresponding data block;

step 2060, in the main thread responsible for task scheduling and executing multiple concurrent computations, continuing the computation flow according to the DAG, including reading the data in the cache;

step 2070, detecting whether the cache holds a data block (on the first iteration) or the data block has been updated; if yes, continuing with step 2080, and if not, waiting;

step 2080, completing corresponding calculation by using the data blocks in the cache, wherein if any calculation uses the data blocks in the cache, subtracting 1 from the reference number;

For each concurrently executed computation, the id of the next data block it needs is determined and compared with the id of the block currently in the cache: if they are the same, the data is taken out for computation; if they differ, the computation blocks and is awakened once a new data block has been cached.

Step 2090, at the same time, in the cache management thread, continuously detecting whether the reference number of the data block is 0, if yes, continuing to step 2100, and if not, waiting;

step 2100, clearing the cached data blocks;

step 2110, determining whether any data blocks have not yet been cached (i.e., whether elements remain in the ordered queue); if yes, obtaining the next data block and returning to step 2050, and if not, ending the cache management thread.

And step 2120, in the main thread, after completing the corresponding calculation using the cached data block, detecting whether there is any calculation that is not completed, if yes, returning to step 2060, continuing to read the data in the cache, and if not, ending the calculation task and returning to the calculation result.
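Outside Spark, the two-thread coordination of steps 2010 to 2120 can be sketched in a minimal form: a cache-management thread caches one block at a time and waits for its reference number to reach 0 before clearing it and caching the next, while the compute side blocks until its block is cached and decrements the reference number on each use. The class and method names below are hypothetical, and the sketch assumes one use per block for brevity:

```python
import threading
from collections import deque

class BlockCacheManager:
    def __init__(self, blocks, refs_per_block):
        self.blocks = blocks                   # block id -> block data
        self.queue = deque(sorted(blocks))     # ordered queue of ids (step 2040)
        self.refs = {bid: refs_per_block for bid in blocks}
        self.cached = None                     # (id, data) currently cached
        self.cond = threading.Condition()

    def run_cache_thread(self):
        while self.queue:
            bid = self.queue.popleft()         # step 2050: next id in order
            with self.cond:
                self.cached = (bid, self.blocks[bid])
                self.cond.notify_all()         # wake waiting computations
                # steps 2090-2100: wait for refs == 0, then clear the block
                self.cond.wait_for(lambda: self.refs[bid] == 0)
                self.cached = None

    def use_block(self, bid):
        # steps 2070-2080: block until `bid` is cached, use it, decrement refs
        with self.cond:
            self.cond.wait_for(lambda: self.cached is not None
                               and self.cached[0] == bid)
            data = self.cached[1]
            self.refs[bid] -= 1
            self.cond.notify_all()             # wake the cache thread
            return data

manager = BlockCacheManager({"b0": [1, 2], "b1": [3, 4]}, refs_per_block=1)
worker = threading.Thread(target=manager.run_cache_thread)
worker.start()
results = [manager.use_block("b0"), manager.use_block("b1")]
worker.join()
```

A single condition variable is enough here because both threads wait on predicates (`refs == 0` on one side, "my block is cached" on the other) and `Condition.wait_for` re-checks the predicate on every wakeup.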

It should be noted that the specific implementation flow of the application in the foregoing embodiment is only an exemplary illustration and does not limit the implementation manner or application scenario of the embodiments of the present invention. The implementer can adopt any applicable implementation in any applicable application scenario according to the specific conditions.

Further, an embodiment of the present invention further provides an apparatus for block caching data, as shown in fig. 3, where the apparatus 30 includes: a cache data request receiving module 301, configured to receive a cache data request of a computing task, where the computing task includes multiple computing units that can be executed concurrently; a target data and data block determining module 302, configured to determine target data required by a computing task and a plurality of data blocks included in the target data, where each data block in the plurality of data blocks is used for computing by at least one computing unit in the plurality of computing units; the data block caching module 303 is configured to cache one data block of the multiple data blocks, clear the current data block after determining that the computing unit corresponding to the corresponding data block is executed and obtains a computing result, and cache the next data block until each data block of the multiple data blocks is cached.

According to an embodiment of the present invention, the apparatus 30 further includes: a data block number configuration module, configured to configure the number of data blocks included in the target data; and a data block dividing module, configured to divide the target data into a plurality of mutually independent data blocks according to the configured number.
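Dividing the target data into a configured number of mutually independent blocks might be sketched as follows; the ceiling-division split (with the last block absorbing the remainder) is one possible partitioning strategy assumed for illustration, as the patent does not prescribe a specific one:

```python
def split_into_blocks(data, num_blocks):
    """Divide target data into num_blocks mutually independent blocks.
    Uses ceiling division so the last block may be shorter."""
    size = -(-len(data) // num_blocks)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(num_blocks)]
```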

According to an embodiment of the present invention, the apparatus 30 further includes a data caching load determining module, configured to determine whether all data blocks of the target data can be cached at once; if not, the subsequent block-by-block caching operations are performed.

According to an embodiment of the present invention, the apparatus 30 further includes a data block sorting module, configured to obtain an identifier of each data block of the multiple data blocks and sort all identifiers of the multiple data blocks to obtain an ordered queue; correspondingly, the data block cache module is specifically configured to take out an identifier of one data block from the ordered queue, and cache the data block corresponding to the corresponding identifier.

According to an embodiment of the present invention, the data block caching module 303 includes: the reference number acquisition submodule is used for acquiring the reference number of the corresponding data block in the process of executing the computing units concurrently, wherein the reference number is the number of times that the corresponding data block is used by all the computing units; and the computing unit execution state judgment sub-module is used for confirming that the computing unit corresponding to the corresponding data block is executed and obtaining a computing result when the reference number of the corresponding data block is 0.

According to an embodiment of the present invention, the apparatus 30 further includes a data block reference number statistics module, configured to: analyze each computing unit of the computing task to obtain the number of times each computing unit uses the target data; accumulate the use counts of all computing units on the corresponding data block to obtain the reference number of the target data; obtain the identifier of each of the plurality of data blocks included in the target data; and record the reference number of the target data as the reference number of the corresponding data block through the identifier of each data block.
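The reference number statistics described above can be sketched as a simple accumulation. Representing each computing unit as the list of block identifiers it uses is an assumption made for illustration:

```python
def count_block_references(compute_units, block_ids):
    """Accumulate, per data block, how many times it is used across all
    computing units; the total is that block's reference number."""
    refs = {bid: 0 for bid in block_ids}
    for unit in compute_units:     # each unit lists the block ids it reads
        for bid in unit:
            refs[bid] += 1
    return refs
```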

According to an embodiment of the present invention, the apparatus 30 further includes a reference number updating module, configured to subtract 1 from the reference number of the corresponding data block each time the cached data block is used by a computing unit.
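The update rule above, together with the zero-check performed by the computing unit execution state judgment sub-module, amounts to the following small sketch (the function name is illustrative):

```python
def use_block(refs, block_id):
    """Decrement the block's reference number after one use by a
    computing unit; return True once it reaches 0, meaning all
    corresponding computing units have executed and the block
    can be cleared from the cache."""
    refs[block_id] -= 1
    return refs[block_id] == 0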

According to an embodiment of the present invention, the apparatus 30 further includes a data clearing module, configured to clear the corresponding data block from the cache.

According to a third aspect of embodiments of the present invention, there is provided a computer storage medium comprising a set of computer executable instructions which, when executed, perform any of the above methods of block caching data.

Here, it should be noted that the above descriptions of the embodiment of the apparatus for block caching of data and of the embodiment of the computer storage medium are similar to the description of the foregoing method embodiments and have similar beneficial effects, and are therefore not repeated. For technical details not disclosed in the descriptions of the apparatus embodiment and the computer storage medium embodiment of the present invention, please refer to the description of the foregoing method embodiments of the present invention, which, for brevity, is not repeated here.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of a unit is only one logical function division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another device, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.

Those of ordinary skill in the art will understand that all or part of the steps for realizing the method embodiments may be completed by program instructions instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage medium, a Read-Only Memory (ROM), a magnetic disk, and an optical disk.

Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage medium, a ROM, a magnetic disk, an optical disk, or the like, which can store the program code.

The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
