Method for recording object storage bucket statistical counts

Document No.: 1963439    Publication date: 2021-12-14

Reading note: this technology, "Method for recording object storage bucket statistical counts" (一种记录对象存储桶统计计数的方法), was designed and created by Li Enze, Wen Liufei, and Chen Jian on 2021-11-10. Its main content is as follows: the invention discloses a method for recording the statistical count of an object storage bucket, belonging to the field of data storage, comprising: after the object storage gateway RGW completes a write operation, synchronously updating the value of the statistical-count key of the bucket to which the target object belongs; splitting the statistical-count key of the target bucket into a number of sub-keys and dividing the whole key space into a specified number of shard subspaces; the task thread reads the value of the target key from its cache, modifies it, then re-stores and persists it to the TIKV to which the sub-key of the target key belongs; after receiving a request to read the statistical count of the target bucket, the RGW, based on the bucket corresponding to the request, splits it into processing requests for the multiple sub-keys and sends them to the back-end TIKV; TIKV traverses the sub-keys of the target bucket's statistical count, merges them within the TIKV nodes, and returns the merged results to the RGW for secondary merging. This avoids unnecessary waiting caused by request queuing in high-concurrency scenarios, as well as the write-conflict and write hot-spot problems caused by modifying the same Key.

1. A method of recording a statistical count of object buckets, comprising the steps of:

S1, after completing a write operation, the object storage gateway synchronously updates the value of the statistical-count key of the bucket to which the target object belongs;

S2, splitting the statistical-count key of the target bucket into a plurality of sub-keys, and dividing the whole key space into a specified number of shard subspaces;

S3, the task thread reads the value of the target key from its cache, modifies it, then re-stores and persists the value to the distributed transactional key-value database to which the sub-key of the target key belongs;

S4, after receiving a request to read the statistical count of the target bucket, the object storage gateway, based on the bucket corresponding to the request, splits it into processing requests for the plurality of sub-keys and sends them to the back-end distributed transactional key-value database;

and S5, the distributed transactional key-value database traverses the sub-keys of the statistical count of the target bucket, merges them within the database nodes, and returns the merged results to the object storage gateway for secondary merging.

2. The method of claim 1, wherein in step S1, after completing its own write request, each thread of the object storage gateway generates the statistical-count key of the target bucket based on the bucket to which the object belongs and its own thread ID.

3. The method of recording object bucket statistics counts of claim 2, wherein when a client issues a write request to the object storage gateway, the object storage gateway dispatches the write request to multiple threads for processing.

4. The method of recording the statistical count of object buckets according to claim 1, wherein after the statistical-count key is split in step S2, when a write operation is performed, the thread performing the write operation first splices the key to be modified according to a pre-agreed format.

5. The method of recording object bucket statistics counts of claim 4, wherein the thread performing the write operation concatenates the keys that need to be modified according to a pre-agreed format, comprising:

the thread performing the write operation splices the sub-key of the key to be modified in the format <(UUID % shard_num) + BucketID + separate_char + UUID>; wherein (UUID % shard_num) is obtained by taking the UUID modulo the number of shards shard_num and splicing it as a prefix; BucketID denotes the bucket ID; separate_char is a separator; and UUID is a universally unique identifier generated from the object storage gateway ID and the thread ID.

6. The method of claim 1, wherein when the statistical-count key is split in step S2, each object storage gateway is assigned a unique ID, and each thread of an object storage gateway also has an ID that is unique within that gateway; when the statistical count is updated, each thread independently updates its own key.

7. The method of recording a statistical count for an object bucket as claimed in claim 1, wherein when the statistical-count key is split in step S2, the sub-key is structured as follows: the ID of the target bucket comes first, followed by a separator that distinguishes the sub-key from other keys, and then a universally unique identifier generated from the object storage gateway ID and the thread ID is spliced onto the end.

8. The method of recording object bucket statistical counts of claim 7, wherein the sharded sub-keys are scattered across different distributed transactional key-value database nodes.

9. The method of claim 1, wherein in step S5, when a read operation is performed, the sub-keys of the statistical count of the target bucket in the distributed transactional key-value database are traversed by the prefix <shard number + BucketID> and merged.

10. The method of recording object bucket statistical counts of claim 9, wherein the step S5 of performing a read operation includes:

when a client sends a read request for a target storage bucket to an object storage gateway, the object storage gateway forwards the read request to each distributed transaction key value database node at the back end;

each node traverses the sub-keys of the statistical count of the target bucket by their prefix <shard number + BucketID>; the results are not returned at this point but are pushed down to the distributed transactional key-value database for coprocessing, and the traversal results, i.e. sub-keys that differ only in their suffix <object storage gateway ID + thread ID>, are merged within the database;

and each distributed transactional key-value database node returns its merged result to the object storage gateway, where secondary merging is carried out.

Technical Field

The invention relates to the technical field of data storage, in particular to a method for recording the statistical count of an object storage bucket.

Background

Object storage, i.e., object-based storage, is a generic term for methods of organizing and processing discrete units of data referred to as objects. Like a file, an object contains data; unlike a file, however, objects are not arranged in a hierarchy. Every object sits at the same level in a flat address space called a storage pool, and no object is nested beneath another object.

Both files and objects carry metadata related to the data they contain, but objects are characterized by extended metadata. Each object is assigned a unique identifier, allowing a server or end user to retrieve the object without knowing the physical address of the data. This approach helps to automate and simplify data storage in a cloud computing environment. Object storage is often compared to valet parking at an upscale restaurant: the customer hands the keys to an attendant in exchange for a receipt, and does not need to know where the car is parked or how many times the attendant moves it during the meal. In this metaphor, the unique identifier of a stored object plays the role of the customer's receipt.

An object storage gateway (RGW) is a service provided by object storage that enables clients to access object storage clusters using standard object storage APIs, while the object storage gateway accesses the underlying storage engine through an internal interface.

The object storage gateway generally provides a standard RESTful interface as a web service, and the gateway layer can perform user permission control, protocol conversion, data analysis and processing, and so on, which greatly broadens the usage scenarios of object storage.

The container used to hold storage objects is called a storage space or bucket, and every storage object must belong to a bucket. Different buckets have different configuration attributes, including region, access permissions, storage type, and so on. Users can create buckets with different attributes according to the requirements of the storage objects in actual production.

The statistical count of a bucket is part of the bucket's metadata and is mainly used to store the bucket's object statistics, including the total number of objects, the total size of the objects, and so on; each bucket has a unique statistical count. In the distributed storage system Ceph, the statistical count is stored in Omap (a single-node key-value store) as a key/value pair whose key is named rgw_bucket_dir_header.
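
To make the idea concrete, the following minimal sketch models a bucket's statistical count as a small record holding an object count and a total size; the field names and the Python representation are illustrative assumptions, not the actual layout of Ceph's rgw_bucket_dir_header.

```python
from dataclasses import dataclass

@dataclass
class BucketStats:
    """Simplified bucket statistical count (illustrative, not Ceph's real structure)."""
    num_objects: int = 0   # total number of objects in the bucket
    total_size: int = 0    # total size of all objects, in bytes

    def apply_put(self, obj_size: int) -> None:
        # A Put adds one object and its size to the running totals.
        self.num_objects += 1
        self.total_size += obj_size

    def apply_delete(self, obj_size: int) -> None:
        # A Delete removes one object and its size.
        self.num_objects -= 1
        self.total_size -= obj_size
```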

Each bucket has a unique statistical count, so in a high-concurrency scenario a synchronous update scheme causes requests to queue up in the OSD (object storage daemon) and block I/O. For this reason most vendors currently record bucket statistical counts asynchronously: when the statistical count needs to be updated, the content to be updated is packaged into a message and sent to a third-party component, a background consumer consumes the message, and the statistical count is updated asynchronously. This asynchronous scheme depends on a third-party component. Its advantages are that the write I/O path is not blocked by the statistics update, the performance impact is small, and conflicts are avoided when multiple clients or multiple threads operate on the same bucket at the same time. Its disadvantages, however, are just as apparent, in the following two respects:

1) depending on a third-party message queue increases maintenance cost, and when messages are sent to the queue asynchronously they must be persisted, so that a power failure does not lose messages and leave the statistical count incorrectly updated;

2) because the update scheme is asynchronous, there is a delay between completing an operation on a storage object and updating the statistical count, so reads of the statistics are inaccurate until the asynchronous update completes.

Another scheme for recording the bucket statistical count is synchronous statistics: every time the object storage gateway receives a write request and completes a Put/Delete operation on a storage object in a bucket, it synchronously updates the bucket's statistical count, and this update is part of the I/O path of the current modification operation. Synchronous statistics increases the latency of write requests, but because the update is synchronous, reads of the statistics are never inaccurate. In a high-concurrency scenario, however, when multiple threads in multiple clients operate on storage objects in the same bucket, every thread updates that bucket's statistical count; because the count is unique, in Ceph the requests queue up in the OSD and execute serially, which degrades performance.

Disclosure of Invention

In the scheme of synchronously updating the statistical count, because of the structure of distributed storage Ceph itself, only one thread can update the statistical count of the target bucket at any moment in a high-concurrency scenario, so request queuing blocks I/O and adds extra queuing time to the request-processing flow. In view of this, the present invention provides a method for recording the statistical count of an object storage bucket that, building on the synchronous statistics flow, replaces the back-end storage component to eliminate the unnecessary waiting caused by request queuing in high-concurrency scenarios, and avoids the write-conflict and write hot-spot problems caused by modifying the same Key under high concurrency.

A method of recording a statistical count of object buckets, comprising the steps of: S1, after completing a write operation, the object storage gateway synchronously updates the value of the statistical-count key of the bucket to which the target object belongs; S2, the statistical-count key of the target bucket is split into a plurality of sub-keys, and the whole key space is divided into a specified number of shard subspaces; S3, the task thread reads the value of the target key from its cache, modifies it, then re-stores and persists the value to the distributed transactional key-value database to which the sub-key of the target key belongs; S4, after receiving a request to read the statistical count of the target bucket, the object storage gateway, based on the bucket corresponding to the request, splits it into processing requests for the plurality of sub-keys and sends them to the back-end distributed transactional key-value database; and S5, the distributed transactional key-value database traverses the sub-keys of the statistical count of the target bucket, merges them within the database nodes, and returns the merged results to the object storage gateway for secondary merging.

Further, in step S1, after completing its own write request, each thread of the object storage gateway generates the statistical-count key of the target bucket based on the bucket to which the object belongs and its own thread ID.

Further, when the client sends a write request to the object storage gateway, the object storage gateway dispatches the write request to a plurality of threads for processing.

Further, after the statistical-count key is split in step S2, when a write operation is performed, the thread performing the write operation first splices the key to be modified according to a pre-agreed format.

Further, the thread performing the write operation splices the key to be modified according to the pre-agreed format as follows: the thread splices the sub-key of the key to be modified in the format <(UUID % shard_num) + BucketID + separate_char + UUID>; wherein (UUID % shard_num) is obtained by taking the UUID modulo the number of shards shard_num and splicing it as a prefix; BucketID denotes the bucket ID; separate_char is a separator; and UUID is a universally unique identifier generated from the object storage gateway ID and the thread ID.
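
A minimal sketch of how a thread might splice such a sub-key is shown below, assuming a SHA-1-derived 64-bit UUID, a '|' separator, and an extra separator after the shard prefix for readability; these choices are illustrative, not mandated by the format above.

```python
import hashlib

SEPARATOR = "|"  # assumed separate_char

def make_uuid(gateway_id: str, thread_id: int) -> int:
    """Derive a 64-bit identifier from the gateway ID and thread ID (illustrative)."""
    digest = hashlib.sha1(f"{gateway_id}:{thread_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def make_sub_key(bucket_id: str, gateway_id: str, thread_id: int, shard_num: int) -> str:
    """Splice a sub-key following <(UUID % shard_num) + BucketID + separate_char + UUID>."""
    uuid = make_uuid(gateway_id, thread_id)
    shard = uuid % shard_num  # shard prefix scatters sub-keys across key ranges
    # A separator is also placed after the shard prefix for readability (an assumption;
    # the format above only specifies the separator before the UUID).
    return f"{shard:02d}{SEPARATOR}{bucket_id}{SEPARATOR}{uuid}"

# Two threads of the same gateway produce distinct sub-keys for the same bucket.
print(make_sub_key("bucket-01", "rgw-1", 11, shard_num=16))
print(make_sub_key("bucket-01", "rgw-1", 12, shard_num=16))
```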

Further, when the statistical-count key is split in step S2, each object storage gateway is assigned a unique ID, and each thread of a gateway also has an ID that is unique within the gateway to which it belongs; when the statistical count is updated, each thread independently updates its own key.

Further, when the statistical-count key is split in step S2, the sub-key is structured as follows: the ID of the target bucket comes first, followed by a separator that distinguishes the sub-key from other keys, and then a universally unique identifier generated from the object storage gateway ID and the thread ID is spliced onto the end.

Further, the sharded sub-keys are scattered across different distributed transactional key-value database nodes.

Further, in step S5, when a read operation is performed, the sub-keys of the statistical count of the target bucket in the distributed transactional key-value database are traversed by the prefix <shard number + BucketID> and merged.

Further, the read operation of step S5 includes: when a client sends a read request for the target bucket to the object storage gateway, the object storage gateway forwards the read request to each back-end distributed transactional key-value database node; each node traverses the sub-keys of the statistical count of the target bucket by their prefix <shard number + BucketID>; the results are not returned at this point but are pushed down to the distributed transactional key-value database for coprocessing, and the traversal results, i.e. sub-keys that differ only in their suffix <object storage gateway ID + thread ID>, are merged within the database; each database node then returns its merged result to the object storage gateway, where secondary merging is carried out.

The technical solution of the invention has the following beneficial effects: on the basis of the synchronous statistics flow, the unnecessary waiting caused by request queuing in high-concurrency scenarios is eliminated by replacing the back-end storage component (the bucket's metadata store is changed from the original single-node key-value store to a distributed transactional key-value database), turning queued serial requests into parallel requests; the counting key is redesigned, with the bucket's original unique counting key split into a plurality of sub-keys, which avoids write conflicts caused by modifying the same key under high concurrency; and the sub-keys are sharded, which avoids hot spots when they are stored in the back-end distributed database.

Drawings

Fig. 1 is a block diagram of an implementation of the method for recording a statistical count of object buckets according to an embodiment of the present invention.

FIG. 2 is a diagram of the sub-key structure of the statistical counting according to the embodiment of the present invention.

FIG. 3 is a flow chart illustrating a recording statistic counting method according to an embodiment of the present invention.

FIG. 4 is a diagram illustrating a synchronous statistical count reading process according to an embodiment of the present invention.

Detailed Description

The invention is further described with reference to the following figures and detailed description of embodiments.

The specific embodiment of the invention provides a method for recording the statistical count of an object bucket, comprising the following steps. S1, after the object storage gateway completes a write operation, it synchronously updates the value of the statistical-count key of the bucket to which the target object belongs. S2, the statistical-count key of the target bucket is split into a plurality of sub-keys, and the whole key space is divided into a specified number of shard subspaces; after the statistical-count key is split, when a write operation is performed, the thread performing the write operation first splices the key to be modified according to a pre-agreed format. S3, the task thread reads the value of the target key from its cache, modifies it, then re-stores and persists the value to the distributed transactional key-value database to which the sub-key of the target key belongs. S4, after receiving a request to read the statistical count of the target bucket, the object storage gateway, based on the bucket corresponding to the request, splits it into processing requests for the plurality of sub-keys and sends them to the back-end distributed transactional key-value database. S5, the distributed transactional key-value database traverses the sub-keys of the statistical count of the target bucket, merges them within the database nodes, and returns the merged results to the object storage gateway for secondary merging.

Unlike the way distributed storage Ceph stores this metadata, the method of the embodiment of the present invention changes the bucket's metadata store from the original OMAP (a single-node key-value store) to the distributed TIKV (a distributed transactional key-value database), which improves metadata read/write performance to a certain extent, i.e., reduces the extra I/O time introduced by synchronous statistics. In addition, because the back-end store is replaced, requests are no longer sent to the OSD (object storage daemon), so in a high-concurrency scenario modifications to a bucket's statistical-count Key no longer need to queue; serial processing becomes parallel processing.

However, in a highly concurrent scenario, write requests frequently conflict because of the uniqueness of the statistical count. The two common remedies are locking the resource and retrying on conflict. Locking introduces waiting time to acquire the lock, and I/O may be blocked while waiting, which hurts performance. Retrying re-executes the conflicting flow whenever the resource conflicts, which blocks the normal I/O flow and also incurs extra memory overhead. Therefore, the invention optimizes the synchronous counting scheme by splitting the bucket's counting key into a plurality of sub-keys and binding each count update to its corresponding thread, which effectively avoids write conflicts.

The invention further improves on the write hot-spot problem by sharding and scattering the statistical-count keys: the whole key space (key range) is divided by hash into a specified number of shard subspaces, so that sub-keys belonging to the same subspace fall into the same range (interval) or adjacent ranges when written.
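
The short sketch below illustrates this grouping under the same assumed key layout: because every sub-key starts with its shard number, a lexicographic sort (which is how a range-partitioned key-value store lays out keys) clusters the sub-keys of each subspace into one contiguous range. The UUID values are hypothetical.

```python
# Assign hypothetical sub-keys to shard subspaces by a modulo hash (sketch only).
shard_num = 4
uuids = [9127, 30501, 48211, 57320, 66103, 70958]  # hypothetical identifiers, shortened
sub_keys = sorted(f"{u % shard_num:02d}|bucket-01|{u}" for u in uuids)

# Sub-keys sharing a shard prefix sort next to each other, so each subspace
# falls into the same or adjacent ranges on the back-end store.
for key in sub_keys:
    print(key)
```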

Because the original single statistical-count key is split into a plurality of sub-keys when write requests are processed, a read request for the statistical count must read the split sub-keys in batches and then merge them to obtain the final result. Splitting the key, however, multiplies the number of keys, which can reach the order of 100,000; reading all of these key values into the RGW (object storage gateway) would sharply increase the service pressure. The invention therefore pushes part of the merge work for the statistical-count keys down to each TIKV node: each node traverses the statistical-count information related to the target bucket and performs a first-level merge locally, then returns the result to the RGW for secondary merging. As a result, the number of keys merged in the RGW equals the number of TIKV nodes, reducing the magnitude of the merge by orders of magnitude and effectively lowering the network overhead and the service pressure on the RGW layer.
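
As a rough worked example of this reduction (the numbers of gateways, threads, and TIKV nodes below are illustrative, not taken from the source):

```latex
K_{\text{sub-keys}} = G \cdot T = 100 \times 1000 = 10^{5}, \qquad
K_{\text{RGW}} = N_{\text{TIKV}} = 5, \qquad
\frac{K_{\text{sub-keys}}}{K_{\text{RGW}}} = 2 \times 10^{4}
```

Here K_sub-keys is the total number of split sub-keys before push-down merging (one per gateway thread), and K_RGW is the number of partial results the RGW merges in the second pass.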

Referring to Fig. 1, a block diagram of an implementation of the method for recording the statistical count of object buckets according to the present invention is shown. As shown in Fig. 1, the client 10 issues object write requests to an object storage gateway (e.g., gateway 1, gateway 2, etc.). When a write request is received, the object storage gateway dispatches it to multiple threads; for example, when gateway 1 receives a write request issued by client 10, it dispatches the request to several of its own threads (such as thread 11, thread 12, thread 13, etc.), and when gateway 2 receives a write request issued by client 10, it dispatches the request to several of its own threads (such as thread 21, thread 22, thread 23, etc.). After completing its own write request, each thread of the object storage gateway generates the statistical-count key of the target bucket from the bucket to which the object belongs and its own thread ID. In the object storage gateway layer, in order to shard the original statistical-count Key, a unique ID must be allocated to each RGW, and each thread within an RGW also has an ID that is unique within that RGW; when the statistical count is updated, each thread independently updates the Keys related to itself.

In order to resolve write conflicts under high-concurrency synchronous statistics, the invention redesigns the bucket's unique statistical-count Key and splits it into a plurality of sub-keys. The structure of a sub-Key is designed as follows: the ID of the target storage space (Bucket) comes first, followed by a special separator, and then a UUID (universally unique identifier) generated from the object storage gateway ID and the thread ID is spliced onto the end. Because the back end is the distributed database TIKV, a write hot-spot problem exists, so these keys must also be sharded: the UUID generated in the previous step is taken modulo shard_num (the number of shards) and spliced onto the sub-Key as a prefix. This operation scatters the statistical-count Keys so that they fall into the divided shards and are distributed to different TIKV nodes. Thus, in the structure of the split statistical-count Key, the 64-bit UUID generated from the object storage gateway ID and the thread ID is scattered according to the number of shards and joined with the ID of the corresponding Bucket and the special separator that distinguishes this Key from other keys. As shown in Fig. 2, the format is <(UUID % shard_num) + BucketID + separate_char + UUID>, where (UUID % shard_num) is the prefix; BucketID denotes the ID of the target bucket; separate_char is the special separator distinguishing the sub-key from other keys; and the UUID generated from the object storage gateway ID and the thread ID sits at the end. When a write operation is performed, the thread performing the write operation splices the sub-key of the key to be modified in the format <(UUID % shard_num) + BucketID + separate_char + UUID>.

With TIKV adopted as the back-end storage component and the bucket statistical-count key split and its key space sharded, the write request flow for recording the statistical count of the present invention is shown in Fig. 3 and includes:

1.1 the client sends a number of Put/Delete requests for different objects to the RGW, and the RGW distributes the requests to a plurality of threads (such as Thread 1, Thread 2, ..., Thread N) for processing;

1.2 after completing its object write request, each thread generates the statistical-count Key of the target Bucket from the Bucket to which the Object belongs and its own thread ID;

1.3 each thread reads the value of the target bucket's statistical-count Key in its own statistics cache, modifies it (for example, the capacity or object count), and writes the result to TIKV after the modification. A code sketch of this write flow is given below.
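
A minimal sketch of this write flow under stated assumptions: each thread maintains its own cached statistics keyed by its own sub-key, applies the modification locally, and then persists the sub-key/value pair through a generic put() call. The KVClient interface, key layout, and JSON serialization are illustrative stand-ins, not the real RGW or TIKV APIs.

```python
import json
from typing import Protocol

class KVClient(Protocol):
    """Stand-in for a distributed transactional key-value client (not the real TIKV API)."""
    def put(self, key: str, value: bytes) -> None: ...

def record_write(kv: KVClient, cache: dict, sub_key: str,
                 obj_size: int, is_delete: bool = False) -> None:
    """Per-thread synchronous update of a bucket statistical-count sub-key (sketch)."""
    stats = cache.setdefault(sub_key, {"num_objects": 0, "total_size": 0})
    delta = -1 if is_delete else 1
    stats["num_objects"] += delta
    stats["total_size"] += delta * obj_size
    # Only this thread ever writes this sub-key, so no two threads contend on one key.
    kv.put(sub_key, json.dumps(stats).encode())
```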

When the statistical count of a bucket is read, because the statistical-count Key has been split into a plurality of sub-keys, the sub-keys must be read in batches and the results merged. Referring to Fig. 4, the specific read flow includes:

2.1 when the client sends a read request for the target bucket to the object storage gateway, the object storage gateway forwards the read request to each back-end distributed transactional key-value database node;

2.2 each node traverses the sub-keys of the statistical count of the target bucket by their prefix <shard number + BucketID>; the results are not returned at this point but are pushed down to the distributed transactional key-value database for coprocessing, and the traversal results, i.e. sub-keys that differ only in their suffix <object storage gateway ID + thread ID>, are merged within the database;

and 2.3, each distributed transactional key-value database node returns its merged result to the object storage gateway, where secondary merging is carried out. A code sketch of this two-level merge follows.
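
To make the two-level merge concrete, the sketch below simulates it with plain dictionaries: each "node" merges the sub-key values it holds for the target bucket (the step that in the real system would run inside the TIKV coprocessor), and the gateway then merges one partial result per node. The prefix-matching helper and data layout are assumptions for illustration only.

```python
def merge(values):
    """Sum per-sub-key statistics into one combined result."""
    total = {"num_objects": 0, "total_size": 0}
    for v in values:
        total["num_objects"] += v["num_objects"]
        total["total_size"] += v["total_size"]
    return total

def node_local_merge(node_kv: dict, bucket_id: str) -> dict:
    """First-level merge, pushed down to a node: pick out sub-keys of the
    target bucket (assumed '<shard>|<bucket>|<uuid>' layout) and merge them."""
    matched = [v for k, v in node_kv.items() if k.split("|")[1] == bucket_id]
    return merge(matched)

def read_bucket_stats(nodes: list, bucket_id: str) -> dict:
    """Second-level merge at the gateway: one partial result per node."""
    return merge(node_local_merge(n, bucket_id) for n in nodes)

# Hypothetical sub-key values spread over two nodes for bucket-01.
node_a = {"00|bucket-01|9127": {"num_objects": 3, "total_size": 300}}
node_b = {"01|bucket-01|30501": {"num_objects": 2, "total_size": 150}}
print(read_bucket_stats([node_a, node_b], "bucket-01"))  # {'num_objects': 5, 'total_size': 450}
```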

In conclusion, the invention modifies the synchronous counting scheme so that counts are recorded in parallel under high concurrency, reducing the queuing time of concurrent write requests and improving write performance, while also mitigating the impact on read requests and lightening the service pressure of count-reading requests.

The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and it is not intended that the invention be limited to these specific details. Those skilled in the art to which the invention pertains may make several equivalent substitutions or obvious modifications of equal performance or use without departing from the spirit of the invention, and all of these are considered to fall within the scope of the invention.
