Copy storage method and device, storage medium and computer equipment

Document No.: 1921485  Publication date: 2021-12-03

Note: this technology, "Copy storage method and device, storage medium and computer equipment" (副本存储方法、装置、存储介质及计算机设备), was created by 黄福堂 on 2021-08-18. Abstract: The application discloses a copy storage method, a copy storage device, a storage medium and computer equipment, belonging to the technical field of computers. The method is applied to a distributed file system and comprises: if at least one storage node storing copies in a target failure group fails, determining the covered failure range, and adjusting, based on the failure range, the storage mode of the copies in the target failure group so that the state of the target data fragment is a fully available state. With the present application, the storage mode of the copies stored on a storage node can be adjusted promptly when the storage node fails, which prevents the storage node failure from affecting the read-write services of the distributed file system and guarantees the availability and reliability of the distributed file system.

1. A copy storage method, applied to a distributed file system, wherein the distributed file system comprises a plurality of failure domains, each failure domain comprises at least one virtual failure sub-domain, and each virtual failure sub-domain comprises a plurality of storage nodes; virtual failure sub-domains equal in number to the copies of a target data fragment are selected from the at least one virtual failure sub-domain to obtain at least one failure group, and the copies of the target data fragment are stored in a target failure group of the at least one failure group; the method comprises the following steps:

if at least one storage node storing the copy in the target failure group fails, determining a covered failure range;

and adjusting the storage mode of the copy in the target failure group based on the failure range so as to enable the state of the target data fragment to be a fully available state.

2. The method of claim 1, wherein the adjusting the storage mode of the copy in the target failure group based on the failure range comprises:

if the failure range is that some storage nodes in a virtual failure sub-domain in the target failure group fail, storing the copies stored on the failed storage nodes into non-failed storage nodes in the same virtual failure sub-domain, wherein the failed storage nodes comprise storage nodes storing the copies;

and if the failure range is that some virtual failure sub-domains in the target failure group fail, reallocating a temporary data fragment, and storing copies of the temporary data fragment into the non-failed virtual failure sub-domains in the target failure group.

3. The method of claim 2, further comprising:

if the failure range is that some storage nodes in a virtual failure sub-domain in the target failure group fail and the failed storage nodes comprise one storage node storing a copy, adjusting the state of the target data fragment to a first partially available state, in which reading and overwriting are allowed;

if the failure range is that some storage nodes in virtual failure sub-domains in the target failure group fail and the failed storage nodes comprise two storage nodes storing the copies, adjusting the state of the target data fragment to a second partially available state, in which only reading is allowed;

and if the failure range is that some storage nodes in the virtual failure sub-domains in the target failure group fail and the failed storage nodes comprise three storage nodes storing the copies, adjusting the state of the target data fragment to an unavailable state, in which reading, overwriting and new writes are all disallowed.
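The availability states recited in claims 3 and 4 can be summarized as a mapping from the number of lost copies to the permitted operations. The following Python sketch is illustrative only; the state and operation names are assumptions, not part of the claims:

```python
# Hypothetical mapping of shard states to permitted operations,
# summarizing claims 3 and 4: each state grants a subset of
# {read, overwrite, append (new write)}.
STATE_PERMISSIONS = {
    "fully_available": {"read", "overwrite", "append"},  # no copies lost
    "first_partial":   {"read", "overwrite"},            # 1 of 3 copies lost
    "second_partial":  {"read"},                         # 2 of 3 copies lost
    "unavailable":     set(),                            # all 3 copies lost
}

def state_for_lost_replicas(lost, total=3):
    """Map the number of lost copies of a data fragment to its state."""
    if lost == 0:
        return "fully_available"
    if lost == 1:
        return "first_partial"
    if lost < total:
        return "second_partial"
    return "unavailable"
```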

4. The method of claim 2, wherein after the storing the copies stored on the failed storage nodes into non-failed storage nodes in the virtual failure sub-domain, the method further comprises:

adjusting the state of the target data fragment to a fully available state, in which reading, overwriting and new writes are all allowed.

5. The method of claim 2, further comprising:

if the failure range is that one virtual failure sub-domain in the target failure group fails, adjusting the state of the target data fragment to a first partially available state, in which reading and overwriting are allowed;

and if the failure range is that two virtual failure sub-domains in the target failure group fail, adjusting the state of the target data fragment to a second partially available state, in which only reading is allowed.

6. The method of claim 5, wherein after the storing the copies of the temporary data fragment into the non-failed virtual failure sub-domains in the target failure group, the method further comprises:

if the failure range is that one virtual failure sub-domain in the target failure group fails, setting the state of the temporary data fragment to a fully available state, in which reading, overwriting and new writes are all allowed;

and if the failure range is that two virtual failure sub-domains in the target failure group fail, setting the state of the temporary data fragment to a fully available state, in which reading, overwriting and new writes are all allowed.

7. The method of claim 2, wherein the reallocating a temporary data fragment if the failure range is that some virtual failure sub-domains in the target failure group fail, and storing copies of the temporary data fragment into the non-failed virtual failure sub-domains in the target failure group, comprises:

if the failure range is that some virtual failure sub-domains in the target failure group fail, reallocating a temporary data fragment, determining, in the non-failed virtual failure sub-domains in the target failure group, storage nodes equal in number to the copies, and storing the copies of the temporary data fragment into the determined storage nodes.

8. The method of claim 7, wherein the determining, in the non-failed virtual failure sub-domains in the target failure group, storage nodes equal in number to the copies comprises:

cyclically selecting storage nodes from the non-failed virtual failure sub-domains in the target failure group in a random order, one storage node at a time, until the number of selected storage nodes equals the number of the copies.
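The selection procedure of claim 8, visiting the non-failed virtual failure sub-domains cyclically in a random order and taking one node per visit, can be sketched as follows. This is a hypothetical implementation; the list-of-lists data layout and function name are assumptions:

```python
import itertools
import random

def select_nodes(healthy_subdomains, num_copies, rng=None):
    """Cyclically pick storage nodes from the non-failed virtual failure
    sub-domains in a random order, one node per visit, until the number
    of selected nodes equals the number of copies (claim 8 sketch)."""
    if sum(len(nodes) for nodes in healthy_subdomains) < num_copies:
        raise ValueError("not enough storage nodes for the requested copies")
    rng = rng or random.Random()
    order = list(range(len(healthy_subdomains)))
    rng.shuffle(order)                       # the random sequence of sub-domains
    cursors = [0] * len(healthy_subdomains)  # next unused node per sub-domain
    selected = []
    for i in itertools.cycle(order):         # sequential, cyclic visiting
        if len(selected) == num_copies:
            break
        if cursors[i] < len(healthy_subdomains[i]):
            selected.append(healthy_subdomains[i][cursors[i]])
            cursors[i] += 1
    return selected
```

With three copies and three healthy sub-domains, the first cycle already yields one node from each sub-domain, which agrees with the one-copy-per-sub-domain dispersion of claim 9.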

9. The method of claim 2 or 6, wherein after the reallocating a temporary data fragment if the failure range is that some virtual failure sub-domains in the target failure group fail, and storing copies of the temporary data fragment into the non-failed virtual failure sub-domains in the target failure group, the method further comprises:

and if the failed virtual failure sub-domains in the target failure group recover, storing the copies of the temporary data fragment back into the target failure group according to a dispersion principle, wherein each virtual failure sub-domain in the target failure group stores one copy.
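The dispersion principle of claim 9, one copy per virtual failure sub-domain after recovery, amounts to a one-to-one assignment. A minimal sketch with assumed names and structures:

```python
def disperse_replicas(fault_group_subdomains, copies):
    """Assign the copies of the temporary data fragment so that each
    virtual failure sub-domain in the target failure group stores
    exactly one copy (claim 9's dispersion principle, illustrative)."""
    if len(copies) != len(fault_group_subdomains):
        raise ValueError("need exactly one copy per sub-domain")
    return dict(zip(fault_group_subdomains, copies))
```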

10. A copy storage apparatus, applied to a distributed file system, wherein the distributed file system comprises a plurality of failure domains, each failure domain comprises at least one virtual failure sub-domain, and each virtual failure sub-domain comprises a plurality of storage nodes; virtual failure sub-domains equal in number to the copies of a target data fragment are selected from the at least one virtual failure sub-domain to obtain at least one failure group, and the copies of the target data fragment are stored in a target failure group of the at least one failure group; the apparatus comprises:

a failure range determination module, configured to determine a covered failure range if at least one storage node storing the copy in the target failure group fails;

and a copy adjustment module, configured to adjust, based on the failure range, the storage mode of the copy in the target failure group so that the state of the target data fragment is a fully available state.

11. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 9.

12. A computer device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of the method according to any of claims 1-9.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for storing a copy, a storage medium, and a computer device.

Background

In a distributed file system, files are stored on the storage nodes in units of data fragments. To ensure the reliability of the distributed file system, each data fragment is stored on different storage nodes in a multi-copy mode, and these storage nodes belong to different failure domains. A failure domain is a storage area divided so that, when one storage area fails, identical copies stored elsewhere are not affected; identical copies are therefore usually stored in different failure domains.

When a storage node holding a copy of a data fragment fails, the copy stored on that node needs to be stored on another normal storage node so that read-write services for the data fragment continue normally. Existing schemes for copy storage upon a node failure can usually only migrate within the failure domain; this migration strategy is not flexible enough and thus affects the reliability and availability of the distributed file system.

Disclosure of Invention

The embodiments of the present application provide a copy storage method, a copy storage device, a storage medium and computer equipment, applied to a distributed file system, wherein the distributed file system comprises a plurality of failure domains, each failure domain comprises at least one virtual failure sub-domain, and each virtual failure sub-domain comprises a plurality of storage nodes; virtual failure sub-domains equal in number to the copies of a target data fragment are selected from the at least one virtual failure sub-domain to obtain at least one failure group, and the copies of the target data fragment are stored in a target failure group of the at least one failure group. The technical scheme is as follows:

in a first aspect, an embodiment of the present application provides a copy storage method, where the method is applied to a computer device, and the method includes:

if at least one storage node storing the copy in the target failure group fails, determining a covered failure range;

and adjusting the storage mode of the copy in the target failure group based on the failure range so as to enable the state of the target data fragment to be a fully available state.

In a second aspect, an embodiment of the present application provides a copy storage apparatus, including:

a failure range determination module, configured to determine a covered failure range if at least one storage node storing the copy in the target failure group fails;

and a copy adjustment module, configured to adjust, based on the failure range, the storage mode of the copy in the target failure group so that the state of the target data fragment is a fully available state.

In a third aspect, embodiments of the present application provide a storage medium having at least one instruction stored thereon, where the at least one instruction is adapted to be loaded by a processor and to perform the above-mentioned method steps.

In a fourth aspect, an embodiment of the present application provides a computer device, which may include: a processor and a memory; wherein the memory stores at least one instruction adapted to be loaded by the processor and to perform the above-mentioned method steps.

The technical solutions provided by some embodiments of the present application have at least the following beneficial effects:

With the copy storage method provided by the embodiments of the present application, when at least one storage node storing a copy in a target failure group of the distributed file system fails, the covered failure range is determined first, and the storage mode of the copies in the target failure group is adjusted based on that range so that the target data fragment is in a fully available state. This prevents the storage node failure from affecting the read-write services of the distributed file system and guarantees the availability and reliability of the distributed file system.

Drawings

To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a distributed file system in the prior art;

FIG. 2 is a system architecture diagram of a distributed file system according to an embodiment of the present application;

FIG. 3 is a schematic flowchart of a copy storage method according to an embodiment of the present application;

FIG. 4 is a schematic flowchart of a copy storage method according to an embodiment of the present application;

FIG. 5 is a schematic diagram of an example of copy migration provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of a copy storage method according to an embodiment of the present application;

FIG. 7 is a schematic diagram of an example of allocating a temporary data fragment according to an embodiment of the present application;

FIG. 8 is a flowchart of a copy storage method provided by another exemplary embodiment of the present application;

FIG. 9 is a flowchart of a copy storage method provided by another exemplary embodiment of the present application;

FIG. 10 is a schematic structural diagram of a copy storage apparatus according to an embodiment of the present application;

FIG. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.

In the description of the present application, the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions; for example, a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article or apparatus. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis. Further, unless otherwise specified, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

Before the embodiments of the present application are described in detail, some concepts are explained to aid understanding.

Distributed file system (DFS): a file system in which the managed physical storage resources are not necessarily attached directly to the local node but are connected to the node (which may be understood simply as a computer) via a computer network. The design of a distributed file system is based on the client/server model; a typical network may include multiple servers accessed by multiple users.

Failure domain (zone): a separate area where copies of data fragments are stored; for any data fragment, each failure domain stores at most one of its copies.

Storage node (data node): a node that stores the data of service files; an object storage device can be regarded as a storage node.

Virtual failure sub-domain (node set): a set of storage nodes; several object storage devices form a virtual failure sub-domain, and one failure domain can comprise multiple virtual failure sub-domains.

Failure group (group): several virtual failure sub-domains located in different failure domains constitute a failure group.

The original file system served only local data within a local area network. With the arrival of the information age, the data acquired by users has grown exponentially, and simply adding hard disks to expand a file system's storage performs poorly in capacity, capacity growth speed, data backup and data security. Distributed file systems therefore emerged, extending the range of service across the entire network. They not only change how data is stored and managed, but also provide advantages in data backup and data security that a local file system cannot achieve.

A distributed file system effectively solves the storage and management of data: a file system fixed at one location is extended to any number of locations and file systems, and many storage nodes form a file system network. The storage nodes may be distributed at different locations and communicate and transfer data over the network. When using a distributed file system, users need not care which storage node data is stored on or obtained from; they manage and store data in the file system just as they would in a local file system.

When the data volume of a single file is too large, a single storage node may be unable to store it, so in a distributed file system the whole file is cut into many small data fragments that are stored on different storage nodes. To ensure high reliability, the data fragments are stored redundantly: each data fragment stores multiple copies on different storage nodes, and the storage nodes holding the copies of each data fragment are isolated according to the configured failure domains. A failure domain is an artificially divided storage area; the division ensures that, when one storage area fails, identical copies stored elsewhere are not affected. Storing identical copies in different failure domains therefore effectively improves the availability and reliability of the distributed file system.

The architecture of such a distributed file system is thus: the system includes a plurality of failure domains, each failure domain includes a plurality of storage nodes, and each storage node stores copies of multiple data fragments. Failure domains may be divided according to the system scale and security level of the distributed file system: for example, a data center, a machine room or a rack may each serve as a failure domain, and an object storage device may be regarded as a storage node. Referring to FIG. 1, a schematic diagram of a distributed file system in the prior art, each data fragment has three copies, and the system comprises three machine rooms; each machine room is one failure domain, and each machine room contains several object storage devices, each being a storage node. The three object storage devices storing the three copies of one data fragment are therefore located in different machine rooms.

In a distributed file system, the probability of losing all copies of a data fragment is generally used to judge the reliability of the system. The probability of losing all R copies of a data fragment is: P = Pr × M / C(N, R), where R represents the number of copies, Pr represents the probability that R storage nodes storing the R copies fail at the same time, N represents the number of storage nodes in the distributed file system, C(N, R) is the number of ways of choosing R nodes from the N storage nodes, and M represents the number of possible distributions of the R copies of one data fragment in the distributed file system.

On this basis, a new distributed file system architecture is proposed: the distributed file system includes multiple failure domains, each failure domain includes at least one virtual failure sub-domain, and each virtual failure sub-domain includes multiple storage nodes; virtual failure sub-domains equal in number to the copies of a target data fragment are selected from the at least one virtual failure sub-domain to obtain at least one failure group, and the copies of the target data fragment are stored in a target failure group of the at least one failure group. Referring to FIG. 2, FIG. 2 shows the system architecture of a distributed file system according to an embodiment of the present disclosure.

As shown in FIG. 2, the distributed file system includes four machine rooms, each machine room includes three sets of object storage devices, and each set includes eight object storage devices. Each machine room serves as a failure domain, each set of object storage devices in a machine room serves as a virtual failure sub-domain, and each object storage device in a set is a storage node. One virtual failure sub-domain is selected from each of three failure domains to form a failure group, and the three copies of one data fragment are stored on storage nodes in the three virtual failure sub-domains of one failure group. By dividing small virtual failure sub-domains under the first-level failure domains, the distributed file system shown in FIG. 2 reduces the value of M in the probability formula for losing the R copies of a data fragment, i.e., it reduces the probability of losing all R copies and improves the reliability of the distributed file system.
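The effect of restricting copy placements on M can be illustrated numerically with the FIG. 2 layout. The number of configured failure groups G below is an assumed value for illustration, not taken from the application:

```python
from math import comb

# FIG. 2 layout: 4 machine rooms (failure domains), 3 node sets
# (virtual failure sub-domains) per room, 8 devices (nodes) per set.
ROOMS, SETS_PER_ROOM, NODES_PER_SET = 4, 3, 8
R = 3  # number of copies per data fragment

# Without failure groups: one node from each of any 3 different rooms.
nodes_per_room = SETS_PER_ROOM * NODES_PER_SET
m_flat = comb(ROOMS, R) * nodes_per_room ** R

# With failure groups: copies confined to one of G configured groups,
# each group holding one sub-domain from each of 3 rooms (G assumed).
G = 4
m_grouped = G * NODES_PER_SET ** R

print(m_flat, m_grouped)  # the grouped layout yields a far smaller M
```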

On the basis of the distributed file system architecture shown in FIG. 2, the present application provides a copy storage method: when a storage node fails, the failure range is determined first, and the storage mode of the copies in the failure group is then adjusted according to that range, so that the failure does not affect the normal read-write services of the distributed file system, fully guaranteeing its availability and reliability.

For convenience of description, the following embodiments take a smartphone as an example of the computer device. The embodiments described below do not represent all embodiments consistent with the present application; they are merely examples of apparatuses and methods consistent with some aspects of the application, as detailed in the appended claims. The flow diagrams in the figures are merely exemplary and need not be performed in the order of the steps shown; for example, some steps are parallel and have no strict logical ordering, so the actual execution order may vary.

Please refer to FIG. 3, which is a schematic flowchart of a copy storage method according to an embodiment of the present disclosure. The method is applied to a distributed file system, wherein the distributed file system comprises a plurality of failure domains, each failure domain comprises at least one virtual failure sub-domain, and each virtual failure sub-domain comprises a plurality of storage nodes; virtual failure sub-domains equal in number to the copies of a target data fragment are selected from the at least one virtual failure sub-domain to obtain at least one failure group, and the copies of the target data fragment are stored in a target failure group of the at least one failure group. As shown in FIG. 3, the copy storage method may include the following steps S101 to S102.

S101: if at least one storage node storing a copy in the target failure group fails, determine the covered failure range.

specifically, whether a fault occurs in the distributed file system is detected in real time by adopting a heartbeat detection mechanism, and when a fault occurs in a certain storage node, a fault range of the current fault is determined, wherein the fault range can be that a single storage node in a target fault group has a fault, a plurality of storage nodes have faults, one virtual fault sub-domain has a fault or a plurality of virtual fault sub-domains have faults.

A single-storage-node failure may be the failure of any storage node in any virtual failure sub-domain in the target failure group. A multiple-storage-node failure may be the failure of several storage nodes in one or more virtual failure sub-domains in the target failure group. A virtual-failure-sub-domain failure may be the failure of all storage nodes in the sub-domain, or the failure of the failure domain where the sub-domain is located; for example, in a distributed file system where a failure domain is a machine room and a virtual failure sub-domain is a rack in the machine room, a sub-domain failure means that one rack, or the whole machine room, fails. A multiple-virtual-failure-sub-domain failure may be the failure of the failure domains corresponding to multiple sub-domains; in the same example, this means that multiple machine rooms fail.
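The failure-range classification described above can be sketched as a function over the set of failed nodes. This is an illustrative sketch; the category names and data structures are assumptions, not from the application:

```python
def classify_failure_scope(failure_group, failed_nodes):
    """Classify the covered failure range for a target failure group,
    given a map of virtual failure sub-domain -> its storage nodes and
    the set of currently failed nodes (illustrative categories)."""
    fully_failed = [sd for sd, nodes in failure_group.items()
                    if nodes and nodes <= failed_nodes]
    partially_failed = [sd for sd, nodes in failure_group.items()
                        if (nodes & failed_nodes) and not nodes <= failed_nodes]
    if len(fully_failed) > 1:
        return "multiple_subdomains_failed"
    if len(fully_failed) == 1:
        return "one_subdomain_failed"
    if partially_failed:
        return "partial_nodes_failed"
    return "no_failure"
```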

S102: adjust, based on the failure range, the storage mode of the copies in the target failure group so that the state of the target data fragment is a fully available state.

specifically, the storage manner of the copy in the target failure group is adjusted in different manners according to the failure range, so that the state of the target data fragment is a fully available state, and the target data fragment is a concept of generic reference, which does not simply represent a certain data fragment in the distributed file system, but can refer to any data fragment in the distributed file system. If the fault range is that a single storage node in the target fault group fails and a plurality of storage nodes fail, storing the copy stored on the failed storage node into the non-failed storage node in the virtual fault sub-domain; if the fault range is that a part of virtual fault sub-domains in the target fault group have faults, the target data fragments have copy loss, part of services of the target data fragments are affected and do not belong to a fully available state, the temporary data fragments are redistributed, the copies of the temporary data fragments are stored in the virtual fault sub-domains which do not have faults in the target fault group, and the temporary data fragments are used for taking over the affected services of the target data fragments.

In a possible implementation manner, if the failure range is that one storage node in a certain virtual failure sub-domain in the target failure group fails, a copy of the target data fragment stored in the failed storage node is stored in a normal storage node that does not fail in the virtual failure sub-domain.

In a possible implementation manner, if the failure range is that a certain virtual failure sub-domain within the target failure group fails, the temporary data fragments are reallocated, and a copy of the temporary data fragments is stored in the virtual failure sub-domain in which no failure occurs in the target failure group, where the temporary data fragments are used to take over the affected service functions of the target data fragments.

In the embodiments of the present application, when at least one storage node storing a copy in a target failure group of the distributed file system fails, the covered failure range is determined first, and the storage mode of the copies in the target failure group is adjusted based on that range so that the target data fragment is in a fully available state. This prevents the storage node failure from affecting the read-write services of the distributed file system and guarantees the availability and reliability of the system.

In one embodiment, the configured number of copies in the distributed file system may be 3. The following embodiments mainly describe the copy storage method upon a storage node failure when the configured number of copies is 3.

Please refer to FIG. 4, which is a schematic flowchart of a copy storage method according to an embodiment of the present disclosure. The method is applied to a distributed file system, wherein the distributed file system comprises a plurality of failure domains, each failure domain comprises at least one virtual failure sub-domain, and each virtual failure sub-domain comprises a plurality of storage nodes; virtual failure sub-domains equal in number to the copies of a target data fragment are selected from the at least one virtual failure sub-domain to obtain at least one failure group, and the copies of the target data fragment are stored in a target failure group of the at least one failure group. As shown in FIG. 4, the copy storage method may include the following steps.

S201: if at least one storage node storing a copy in the target failure group fails, determine the covered failure range.

specifically, whether a fault occurs in the distributed file system is detected in real time by adopting a heartbeat detection mechanism, and when a fault occurs in a certain storage node, a fault range of the current fault is determined, wherein the fault range can be that a single storage node in a target fault group has a fault, a plurality of storage nodes have faults, one virtual fault sub-domain has a fault or a plurality of virtual fault sub-domains have faults.

The heartbeat detection mechanism sends heartbeat messages in real time to each storage node of the virtual fault sub-domains in each fault domain of the distributed file system. If a reply is received, the replying storage node is determined not to have failed; if no reply is received, the non-replying storage node is determined to have failed.
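The probing loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the nested-dictionary layout of fault domains and the `send_heartbeat` callback (returning whether a node replied in time) are assumptions for the example.

```python
def detect_failed_nodes(failure_domains, send_heartbeat):
    """Probe every storage node of every virtual failure sub-domain and
    collect the nodes that do not reply to a heartbeat message."""
    failed = []
    for domain in failure_domains:
        for sub_domain in domain["sub_domains"]:
            for node in sub_domain["nodes"]:
                if not send_heartbeat(node):
                    failed.append(node)  # no reply -> node is treated as failed
    return failed
```

The set of failed nodes returned here is what the subsequent steps use to decide whether the failure range is a single node, several nodes, or one or more whole virtual fault sub-domains.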

S202, if the fault range is that a part of the storage nodes in a virtual fault sub-domain in the target fault group fail, storing the copies stored on the failed storage nodes into the non-failed storage nodes in the same virtual fault sub-domain, wherein the part of the storage nodes includes the storage nodes storing the copies;

Specifically, when a part of the storage nodes in a virtual fault sub-domain in the target fault group fail, the copy stored on each failed storage node is stored on a non-failed storage node in the same virtual fault sub-domain.

In a distributed file system with a copy configuration amount of 3, the target failure group includes three virtual failure sub-domains. A failure of part of the storage nodes in the virtual failure sub-domains of the target failure group may mean that one or more storage nodes fail in one of the virtual failure sub-domains, that one or more storage nodes fail in each of two virtual failure sub-domains, or that one or more storage nodes fail in each of the three virtual failure sub-domains. As described above, the three copies of one target data fragment are stored in the three virtual failure sub-domains of the target failure group, with one storage node in each virtual failure sub-domain storing one copy. Accordingly, a failure of part of the storage nodes may correspond to three cases of copy loss for the target data fragment, which correspond to steps S2021 to S2023.

S2021, if the fault range is that a part of the storage nodes in the virtual fault sub-domains in the target fault group fail and the failed storage nodes include one storage node storing a copy, adjusting the state of the target data fragment to a first partially available state, wherein the first partially available state allows only reading and overwriting;

Specifically, if the failed storage nodes include one storage node storing a copy, the number of lost copies of the target data fragment is 1. The remaining number of available copies (2) no longer satisfies the preset copy configuration amount (3), so the state of the target data fragment is adjusted to a first partially available state, in which only reading and overwriting are allowed.

S2022, if the fault range is that a part of the storage nodes in the virtual fault sub-domains in the target fault group fail and the failed storage nodes include two storage nodes storing copies, adjusting the state of the target data fragment to a second partially available state, wherein the second partially available state allows only reading;

Specifically, if the failed storage nodes include two storage nodes storing copies, the number of lost copies of the target data fragment is 2. The remaining number of available copies (1) no longer satisfies the preset copy configuration amount (3), so the state of the target data fragment is adjusted to a second partially available state, in which only reading is allowed.

S2023, if the fault range is that a part of the storage nodes in the virtual fault sub-domains in the target fault group fail and the failed storage nodes include three storage nodes storing copies, adjusting the state of the target data fragment to an unavailable state, wherein the unavailable state allows neither reading, nor overwriting, nor new writing;

Specifically, if the failed storage nodes include three storage nodes storing copies, the number of lost copies of the target data fragment is 3 and the number of available copies is 0; that is, all copies of the target data fragment are lost. The state of the target data fragment is therefore adjusted to an unavailable state, in which reading, overwriting and new writing are all disallowed.

It can be understood that, to ensure high reliability, the distributed file system stores the target data fragment using a multi-copy policy, where the number of copies represents the security level of the target data fragment. For example, in the embodiment of the present application, a 3-copy storage policy corresponds to a security level of 3 for the target data fragment. When some of the 3 copies fail, the failed copies are repaired in order to keep the security level at 3. During copy repair, new writes or overwrites to the target data fragment land on the copies that have not failed; for the failed copies under repair, data synchronization between copies must be performed after the repair completes, and synchronizing newly written data is more complex than synchronizing overwritten data.
Therefore, to reduce the complexity of data synchronization between the time some copies fail and the time they are repaired, the state of the target data fragment is adjusted when copies fail. In the embodiment of the present application, in a distributed file system with a copy configuration amount of 3: when 1 copy fails, new writes to the target data fragment are restricted, that is, the state is adjusted to allow only reading and overwriting; when 2 copies fail, both new writes and overwrites are restricted, that is, the state is adjusted to read-only; when 3 copies fail, no available copy remains, no operation can be performed on the target data fragment, and the state is adjusted to disallow reading, overwriting and new writing.
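The availability rules above reduce to a simple mapping from the number of lost copies to a shard state. The sketch below assumes the 3-copy configuration described in this embodiment; the state labels are illustrative names, not terms from the source.

```python
# Availability states for a replica count of 3 (illustrative labels).
FULLY_AVAILABLE = "read+overwrite+new-write"
FIRST_PARTIAL = "read+overwrite"        # 1 copy lost: new writes blocked
SECOND_PARTIAL = "read-only"            # 2 copies lost
UNAVAILABLE = "no-read/no-write"        # all copies lost

def shard_state(lost_copies, replica_count=3):
    """Map the number of lost copies to the target data fragment's state."""
    available = replica_count - lost_copies
    if available == replica_count:
        return FULLY_AVAILABLE
    if available == 2:
        return FIRST_PARTIAL
    if available == 1:
        return SECOND_PARTIAL
    return UNAVAILABLE
```

The monotonic tightening (full, then read+overwrite, then read-only, then unavailable) reflects the observation that synchronizing new writes after repair is costlier than synchronizing overwrites.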

S203, adjusting the state of the target data fragment to a fully available state, wherein the fully available state allows reading, overwriting and new writing.

Specifically, step S203 is performed after step S202: once the copies stored on the failed storage nodes have been stored on non-failed storage nodes in the same virtual failure sub-domain, the three copies of the target data fragment are restored to normal availability, and the state of the target data fragment is adjusted to a fully available state that allows reading, overwriting and new writing.

Please refer to fig. 5, which is a schematic diagram of an example of copy migration according to an embodiment of the present application. As shown in fig. 5, the target failure group includes virtual failure sub-domain 1, virtual failure sub-domain 2 and virtual failure sub-domain 3. One copy of the target data fragment is stored on object storage device 2 in virtual failure sub-domain 1, one on object storage device 6 in virtual failure sub-domain 2, and one on object storage device 4 in virtual failure sub-domain 3. When object storage device 1, object storage device 2 and object storage device 3 in virtual failure sub-domain 1 fail, the copy originally stored on object storage device 2 may be stored on object storage device 6 of virtual failure sub-domain 1, that is, on a non-failed storage node in the same sub-domain.
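The intra-sub-domain migration of steps S202/S203 can be sketched as follows. This is a simplified illustration under assumed data structures (node dictionaries with a `failed` flag); the real system would also copy the data and re-synchronize it, which is omitted here.

```python
def migrate_within_sub_domain(sub_domain, replica_holder):
    """If the node holding the copy has failed, return a healthy node in
    the SAME virtual failure sub-domain to take over the copy; otherwise
    keep the current holder."""
    if not replica_holder["failed"]:
        return replica_holder
    for node in sub_domain["nodes"]:
        if not node["failed"]:
            return node  # copy is re-stored on this non-failed node
    raise RuntimeError("no healthy node left in this sub-domain")
```

In the fig. 5 example, the holder would be object storage device 2 of sub-domain 1, and the function would return a surviving device such as device 6 of the same sub-domain.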

In the embodiment of the application, when at least one storage node storing a copy in a target failure group of the distributed file system fails, the covered failure range is determined first. When the failure range is that a part of the storage nodes in a virtual failure sub-domain of the target failure group fail and the availability of the target data fragment is affected, the copies stored on the failed storage nodes are stored on non-failed storage nodes in the same virtual failure sub-domain, so that the target data fragment is restored to a fully available state and the availability of the distributed file system is ensured.

Fig. 6 is a schematic flowchart of a copy storage method according to an embodiment of the present application. The copy storage method is applied to a distributed file system. The distributed file system comprises a plurality of fault domains, each fault domain comprises at least one virtual fault sub-domain, and each virtual fault sub-domain comprises a plurality of storage nodes. A number of virtual fault sub-domains equal to the number of copies of a target data fragment is selected from the at least one virtual fault sub-domain to obtain at least one fault group, and the copies of the target data fragment are stored in a target fault group of the at least one fault group. As shown in fig. 6, the copy storage method may include the following steps.

S301, if at least one storage node storing the copy in the target failure group fails, determining a covered failure range;

Specifically, a heartbeat detection mechanism is used to detect in real time whether a fault occurs in the distributed file system. When a storage node fails, the failure range of the current fault is determined. The failure range may be that a single storage node in the target fault group fails, that a plurality of storage nodes fail, that one virtual fault sub-domain fails, or that a plurality of virtual fault sub-domains fail.

The heartbeat detection mechanism sends heartbeat messages in real time to each storage node of the virtual fault sub-domains in each fault domain of the distributed file system. If a reply is received, the replying storage node is determined not to have failed; if no reply is received, the non-replying storage node is determined to have failed.

S302, if the fault range is that a part of the virtual fault sub-domains in the target fault group fail, reallocating a temporary data fragment, determining a number of storage nodes equal to the number of copies among the non-failed virtual fault sub-domains in the target fault group, and storing the copies of the temporary data fragment on those storage nodes;

It can be understood that if the fault range is that a part of the virtual fault sub-domains in the target fault group fail, the copies of the target data fragment stored in the failed virtual fault sub-domains are lost and cannot be recovered within those sub-domains. Therefore, a temporary data fragment is reallocated, and its three copies are stored in the remaining non-failed virtual fault sub-domains of the target fault group. A virtual fault sub-domain failure may be a failure of the sub-domain alone, or a failure of the entire fault domain to which the sub-domain belongs.

Specifically, if the fault range is that a part of the virtual fault sub-domains in the target fault group fail, the temporary data fragment is reallocated: storage nodes are selected in turn from the non-failed virtual fault sub-domains of the target fault group in a random order, one storage node at a time, until the number of selected storage nodes equals the number of copies, and the copies of the temporary data fragment are stored on the selected storage nodes.

The temporary data fragment takes over the services that can no longer be provided because copies of the target data fragment have been lost. For example, if the loss of copies prevents the new-write service of the target data fragment from continuing, the temporary data fragment takes over new writes in place of the target data fragment.
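The takeover of the new-write service can be sketched as a small routing rule. This is an illustrative sketch only: the `ShardRouter` class, the `lost_copies` field, and the shard dictionaries are assumptions invented for the example, not names from the source.

```python
class ShardRouter:
    """Route new writes to a temporary shard while the target shard has
    lost copies; reads/overwrites still go to the surviving copies."""

    def __init__(self):
        self.temp_shard = None

    def route_new_write(self, target_shard):
        if target_shard["lost_copies"] > 0:
            # During the failure, newly written data goes to the
            # temporary data fragment, allocated lazily on first use.
            if self.temp_shard is None:
                self.temp_shard = {"id": "temp", "lost_copies": 0}
            return self.temp_shard
        return target_shard  # healthy shard keeps serving new writes
```

Once the target shard's copies are repaired, new writes return to it and the temporary shard's new-write service is closed, as described below.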

S303, if the failed virtual fault sub-domains in the target fault group return to normal, storing the copies of the temporary data fragment into the target fault group according to a dispersion principle, wherein one copy is stored in each virtual fault sub-domain of the target fault group.

Specifically, if the failed virtual fault sub-domains return to normal after repair, the copies of the temporary data fragment are redistributed across all virtual fault sub-domains in the target fault group according to the dispersion principle, with each virtual fault sub-domain storing one copy of the temporary data fragment.
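The dispersion principle of S303 (one copy per virtual fault sub-domain after recovery) can be sketched as follows, under assumed data structures; how the first healthy node in each sub-domain is chosen is a simplification for illustration.

```python
def disperse_copies(sub_domains, replica_count):
    """Place one copy of the temporary data fragment in each virtual
    fault sub-domain of the target fault group, on a healthy node."""
    assert len(sub_domains) == replica_count  # one sub-domain per copy
    placement = []
    for sub in sub_domains:
        healthy = [n for n in sub["nodes"] if not n["failed"]]
        placement.append(healthy[0])  # one copy in each sub-domain
    return placement
```

Spreading the copies back out restores the original failure-isolation property: losing any single sub-domain again costs at most one copy.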

Please refer to fig. 7, which is a schematic diagram of an example of allocating a temporary data fragment according to an embodiment of the present application. As shown in fig. 7, the target failure group includes virtual failure sub-domain 1, virtual failure sub-domain 2 and virtual failure sub-domain 3. One copy of the target data fragment is stored on object storage device 2 in virtual failure sub-domain 1, one on object storage device 6 in virtual failure sub-domain 2, and one on object storage device 4 in virtual failure sub-domain 3. When virtual failure sub-domain 1 fails, a temporary data fragment may be reallocated, and its three copies may be stored on object storage device 2 and object storage device 3 in the non-failed virtual failure sub-domain 2 and on object storage device 5 in virtual failure sub-domain 3.

Further, after the copies of the temporary data fragment are reallocated in the target failure group, the state of the target data fragment is adjusted to a fully available state, and the state of the temporary data fragment is adjusted to a first partially available state, wherein the fully available state allows reading, overwriting and new writing, and the first partially available state allows only reading and overwriting.

It can be understood that the temporary data fragment is a data fragment temporarily allocated when the target data fragment fails, and is used to take over the new-write service of the target data fragment during the failure; that is, data newly written to the target data fragment during the failure is stored in the temporary data fragment. After the copies of the target data fragment are repaired, the target data fragment can resume the new-write service, and the new-write service of the temporary data fragment is closed so that data storage continues to follow the original rule. The state of the temporary data fragment is therefore adjusted to allow only reading and overwriting.

In the embodiment of the application, when at least one storage node storing a copy in a target failure group of the distributed file system fails, the covered failure range is determined first. If the failure range is that a part of the virtual failure sub-domains in the target failure group fail, a temporary data fragment is allocated and its copies are stored in the non-failed virtual failure sub-domains, so that the temporary data fragment takes over the services of the target data fragment affected by the copy loss and the availability of the distributed file system is ensured. When the failed virtual failure sub-domains return to normal, the storage mode of the copies of the temporary data fragment is adjusted within the target failure group according to the dispersion principle, further ensuring the reliability of the distributed file system.

Based on the scheme disclosed in the embodiment of fig. 6, the available states of the target data fragment and the temporary data fragment should be adjusted during the period from the failure of a virtual failure sub-domain to its recovery; please refer to the following embodiments.

Referring to fig. 8, fig. 8 is a flowchart of a copy storage method according to another exemplary embodiment of the present application. The copy storage method can be applied to the system shown in fig. 2. The copy storage method is applied to a distributed file system. The distributed file system comprises a plurality of fault domains, each fault domain comprises at least one virtual fault sub-domain, and each virtual fault sub-domain comprises a plurality of storage nodes. A number of virtual fault sub-domains equal to the number of copies of a target data fragment is selected from the at least one virtual fault sub-domain to obtain at least one fault group, and the copies of the target data fragment are stored in a target fault group of the at least one fault group. As shown in fig. 8, the copy storage method includes:

S401, if at least one storage node storing the copy in the target failure group fails, determining a covered failure range;

S402, if the fault range is that one virtual fault sub-domain in the target fault group fails, adjusting the state of the target data fragment to a first partially available state, wherein the first partially available state allows only reading and overwriting;

It can be understood that one copy of the target data fragment is stored in each virtual failure sub-domain of the target failure group. If the failure range is that one virtual failure sub-domain in the target failure group fails, the number of lost copies of the target data fragment is 1, and in a distributed file system with a copy configuration amount of 3, the state of the target data fragment needs to be adjusted to allow only reading and overwriting.

S403, reallocating a temporary data fragment, determining a number of storage nodes equal to the number of copies among the non-failed virtual fault sub-domains in the target fault group, and storing the copies of the temporary data fragment on those storage nodes;

S404, setting the state of the temporary data fragment to a fully available state, wherein the fully available state allows reading, overwriting and new writing;

S405, if the failed virtual fault sub-domains in the target fault group return to normal, storing the copies of the temporary data fragment into the target fault group according to a dispersion principle, wherein one copy is stored in each virtual fault sub-domain of the target fault group.

In the embodiment of the application, when one virtual fault sub-domain in the target fault group fails, the target data fragment is adjusted to a first partially available state that allows only reading and overwriting; a temporary data fragment is then allocated, its copies are stored in the non-failed virtual fault sub-domains of the target fault group, and the temporary data fragment is set to a fully available state that allows reading, overwriting and new writing, so that the temporary data fragment takes over the new-write service of the target data fragment.

Referring to fig. 9, fig. 9 is a flowchart of a copy storage method according to another exemplary embodiment of the present application. The copy storage method can be applied to the system shown in fig. 2. The copy storage method is applied to a distributed file system. The distributed file system comprises a plurality of fault domains, each fault domain comprises at least one virtual fault sub-domain, and each virtual fault sub-domain comprises a plurality of storage nodes. A number of virtual fault sub-domains equal to the number of copies of a target data fragment is selected from the at least one virtual fault sub-domain to obtain at least one fault group, and the copies of the target data fragment are stored in a target fault group of the at least one fault group. As shown in fig. 9, the copy storage method includes:

S501, if at least one storage node storing the copy in the target failure group fails, determining a covered failure range;

S502, if the fault range is that two virtual fault sub-domains in the target fault group fail, adjusting the state of the target data fragment to a second partially available state, wherein the second partially available state allows only reading;

It can be understood that one copy of the target data fragment is stored in each virtual failure sub-domain of the target failure group. If the failure range is that two virtual failure sub-domains in the target failure group fail, the number of lost copies of the target data fragment is 2, and in a distributed file system with a copy configuration amount of 3, the state of the target data fragment needs to be adjusted to allow only reading.

S503, reallocating a temporary data fragment, determining a number of storage nodes equal to the number of copies among the non-failed virtual fault sub-domains in the target fault group, and storing the copies of the temporary data fragment on those storage nodes;

S504, setting the state of the temporary data fragment to a fully available state, wherein the fully available state allows reading, overwriting and new writing;

S505, if the failed virtual fault sub-domains in the target fault group return to normal, storing the copies of the temporary data fragment into the target fault group according to a dispersion principle, wherein one copy is stored in each virtual fault sub-domain of the target fault group.

In the embodiment of the application, when two virtual fault sub-domains in the target fault group fail, the target data fragment is adjusted to a second partially available state that allows only reading; a temporary data fragment is then allocated, its copies are stored in the non-failed virtual fault sub-domain of the target fault group, and the temporary data fragment is set to a fully available state that allows reading, overwriting and new writing, so that the temporary data fragment takes over the new-write service of the target data fragment.

Fig. 10 is a schematic structural diagram of a copy storage apparatus according to an embodiment of the present application. As shown in fig. 10, the copy storage apparatus 1 may be implemented as all or part of a computer device by software, hardware, or a combination of the two. According to some embodiments, the copy storage apparatus 1 includes a failure range determining module 11 and a copy adjusting module 12, and specifically includes:

a failure range determining module 11, configured to determine a covered failure range if at least one storage node storing the copy in the target failure group fails;

and a copy adjusting module 12, configured to adjust, based on the failure range, the storage mode of the copy in the target failure group so that the state of the target data fragment is a fully available state.

Optionally, the copy adjusting module 12 includes:

a first copy adjusting unit 121, configured to, if the failure range is that a part of the storage nodes in a virtual failure sub-domain in the target failure group fail, store the copies stored on the failed storage nodes into the non-failed storage nodes in the same virtual failure sub-domain, where the part of the storage nodes includes the storage nodes storing the copies;

and a second copy adjusting unit 122, configured to, if the failure range is that a part of the virtual failure sub-domains in the target failure group fail, reallocate a temporary data fragment and store the copies of the temporary data fragment in the non-failed virtual failure sub-domains of the target failure group.

Optionally, the copy adjusting module 12 further includes a first state adjusting unit 123, where the first state adjusting unit 123 is specifically configured to:

if the fault range is that a part of the storage nodes in a virtual fault sub-domain in the target fault group fail and the failed storage nodes include one storage node storing the copy, adjust the state of the target data fragment to a first partially available state, wherein the first partially available state allows only reading and overwriting;

if the fault range is that a part of the storage nodes in a virtual fault sub-domain in the target fault group fail and the failed storage nodes include two storage nodes storing the copies, adjust the state of the target data fragment to a second partially available state, wherein the second partially available state allows only reading;

and if the fault range is that a part of the storage nodes in a virtual fault sub-domain in the target fault group fail and the failed storage nodes include three storage nodes storing the copies, adjust the state of the target data fragment to an unavailable state, wherein the unavailable state forbids reading, overwriting and new writing.

Optionally, the copy adjusting module 12 further includes a second state adjusting unit 124, where the second state adjusting unit 124 is specifically configured to:

adjust the state of the target data fragment to a fully available state, wherein the fully available state allows reading, overwriting and new writing.

Optionally, the copy adjusting module 12 further includes a third state adjusting unit 125, where the third state adjusting unit 125 is specifically configured to:

if the fault range is that one virtual fault sub-domain in the target fault group fails, adjust the state of the target data fragment to a first partially available state, wherein the first partially available state allows only reading and overwriting;

and if the fault range is that two virtual fault sub-domains in the target fault group fail, adjust the state of the target data fragment to a second partially available state, wherein the second partially available state allows only reading.

Optionally, the copy adjusting module 12 further includes a fourth state adjusting unit 126, where the fourth state adjusting unit 126 is specifically configured to:

if the fault range is that one virtual fault sub-domain in the target fault group has a fault, setting the state of the temporary data fragment to be a fully available state;

and if the fault range is that two virtual fault sub-domains in the target fault group have faults, setting the state of the temporary data fragment to be a fully available state.

Optionally, the second copy adjusting unit 122 is specifically configured to:

if the fault range is that a part of the virtual fault sub-domains in the target fault group fail, reallocate a temporary data fragment, determine a number of storage nodes equal to the number of copies among the non-failed virtual fault sub-domains in the target fault group, and store the copies of the temporary data fragment on those storage nodes.

Optionally, the second copy adjusting unit 122 is specifically configured to:

cyclically select storage nodes in turn from each non-failed virtual fault sub-domain in the target fault group in a random order, one storage node at a time, until the number of selected storage nodes equals the number of copies.
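The selection rule above can be sketched as follows. This is an illustrative sketch under assumed data structures; the `rng` parameter is an assumption added so that the random sub-domain order can be controlled in tests.

```python
import random

def pick_nodes(healthy_sub_domains, replica_count, rng=random):
    """Cycle through the healthy sub-domains in a random order, taking one
    node per visit, until `replica_count` nodes have been chosen."""
    order = list(healthy_sub_domains)
    rng.shuffle(order)                      # random sub-domain order
    pools = [list(sub["nodes"]) for sub in order]
    if sum(len(p) for p in pools) < replica_count:
        raise ValueError("not enough healthy storage nodes")
    chosen, i = [], 0
    while len(chosen) < replica_count:
        pool = pools[i % len(pools)]        # round-robin over sub-domains
        if pool:
            chosen.append(pool.pop(0))      # one node per visit
        i += 1
    return chosen
```

Visiting the sub-domains round-robin rather than draining one sub-domain first keeps the temporary copies spread as evenly as the surviving sub-domains allow.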

Optionally, the copy adjusting module 12 further includes a failure recovery unit 127, where the failure recovery unit 127 is specifically configured to:

if the failed virtual fault sub-domains in the target fault group return to normal, store the copies of the temporary data fragment into the target fault group according to a dispersion principle, wherein one copy is stored in each virtual fault sub-domain of the target fault group.

The serial numbers of the above embodiments of the present application are merely for description and do not represent the merits of the embodiments.

In the embodiment of the application, when at least one storage node storing a copy in a target failure group of the distributed file system fails, the covered failure range is determined first, and the storage mode of the copy in the target failure group is adjusted based on the failure range so that the state of the target data fragment becomes a fully available state. In this way, a storage node failure is prevented from affecting the read-write service of the distributed file system, and the availability and reliability of the distributed file system are ensured.

An embodiment of the present application further provides a computer storage medium. The computer storage medium may store a plurality of instructions suitable for being loaded by a processor to execute the copy storage method of the embodiments shown in fig. 1 to fig. 9; for the specific execution process, refer to the descriptions of those embodiments, which are not repeated here.

The present application further provides a computer program product in which at least one instruction is stored. The at least one instruction is loaded by the processor to execute the copy storage method of the embodiments shown in fig. 1 to fig. 9; for the specific execution process, refer to the descriptions of those embodiments, which are not repeated here.

Referring to fig. 11, a block diagram of a computer device according to an exemplary embodiment of the present application is shown. The computer device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.

Processor 110 may include one or more processing cores. The processor 110 connects various parts within the computer device using various interfaces and lines, and performs various functions of the computer device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware in at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA) form. The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.

The Memory 120 may include a Random Access Memory (RAM) or a read-only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets.

The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In the embodiment of the present application, the input device 130 may be a temperature sensor for acquiring the operating temperature of the computer device. The output device 140 may be a speaker for outputting audio signals.

In addition, those skilled in the art will appreciate that the configuration of the computer device shown in the above figures does not constitute a limitation on the computer device; a computer device may include more or fewer components than those shown, combine some of the components, or arrange the components differently. For example, the computer device may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a Bluetooth module, and other components, which are not described herein again.

In the embodiment of the present application, the execution subject of each step may be the computer device described above. Optionally, the execution subject of each step is the operating system of the computer device. The operating system may be an Android system, an iOS system, or another operating system, which is not limited in this embodiment of the present application.

In the computer device shown in fig. 11, the processor 110 may be configured to call the copy storage program stored in the memory 120 and execute the copy storage program to implement the copy storage method according to the method embodiments of the present application.

In the embodiment of the application, when at least one storage node storing a copy in a target failure group of the distributed file system fails, the covered failure range is first determined. Based on that failure range, the storage mode of the copy in the target failure group is adjusted so that the state of the target data fragment becomes fully available. This avoids any impact of a storage node failure on the read-write service of the distributed file system, and ensures the availability and reliability of the distributed file system.
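The flow above — detect a failure, determine the covered failure range, then adjust the storage mode of the copy accordingly — can be sketched as follows. This is a minimal, hypothetical illustration only: the names `SubDomain`, `determine_failure_scope`, and `adjust_storage`, and the two-level "node vs. sub-domain" range distinction, are assumptions for clarity and are not identifiers taken from the embodiments.

```python
# Hypothetical sketch of the copy-adjustment flow. A failure group holds one
# replica per virtual fault sub-domain; each sub-domain contains several nodes.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SubDomain:
    # True = node healthy, False = node failed
    node_health: List[bool]


def determine_failure_scope(group: List[SubDomain]) -> Optional[str]:
    """Return the covered failure range, or None if no node has failed."""
    scope = None
    for sd in group:
        if all(sd.node_health):
            continue  # no failure in this sub-domain
        if any(sd.node_health):
            scope = scope or "node"  # node-level failure; sub-domain survives
        else:
            return "sub_domain"      # entire sub-domain lost: widest range
    return scope


def adjust_storage(group: List[SubDomain]) -> str:
    """Pick a re-placement strategy so the data fragment stays fully available."""
    scope = determine_failure_scope(group)
    if scope is None:
        return "no_change"
    if scope == "node":
        # migrate the affected replica to a healthy node in the same sub-domain
        return "migrate_within_sub_domain"
    # the whole sub-domain is lost: place the replica in another failure group
    return "relocate_to_other_group"
```

In this sketch a node-level failure is repaired inside the same virtual fault sub-domain, while a sub-domain-level failure forces relocation, mirroring the idea that the adjustment depends on the covered failure range.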

It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA) or an Integrated Circuit (IC).

It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is only one type of logical-function division, and there may be other divisions in actual implementation — for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some service interfaces, devices, or units, and may be in electrical or other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.

The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
