Method and system for consistent distributed memory pools in blockchain networks

Document No. 1510747, published 2020-02-07

Reading note: This technology, "Method and system for consistent distributed memory pools in blockchain networks", was devised by G. Destefanis, S. Madeo, P. Motylinski and S. Vincent on 2018-06-19. Its main content is as follows: A computer-implemented method may be provided, and may be implemented using a blockchain network such as the bitcoin network. The computer-implemented method includes: receiving, at a memory pool node of a distributed memory pool network (DMP) implementing a distributed hash table (DHT), a request to update routing information of the memory pool node; initializing a set of random walks in the key space of the DHT, the set of random walks being used for generating a set of key-value records to be stored by the memory pool node; generating a first record in a first table based at least in part on first identification information, the first identification information being stored in a second table of the long-range table type; generating a second record in a third table of the long-range table type by at least performing a first random walk of the set of random walks, the second record containing second identification information and an address associated therewith; and generating a third record in a fourth table of the short-range table type by obtaining records from at least a table maintained by a second memory pool node of the DMP network.

1. A computer-implemented method, comprising:

receiving, at a memory pool node of a distributed memory pool network (DMP) for implementing a Distributed Hash Table (DHT), a request to update routing information of the memory pool node;

initializing a set of random walks in a key space of the DHT, the set of random walks being used for generating a set of key-value records to be stored by the memory pool node;

generating a first record in a first table based at least in part on first identifying information stored in a second table of the long-range table type;

generating a second record in a third table of the long-range table type by at least performing a first random walk of the set of random walks, the second record containing second identifying information and an address associated with the second identifying information;

generating a third record in a fourth table of a short-range table type by obtaining records from at least a table maintained by a second memory pool node of the DMP network.

2. The computer-implemented method of claim 1, further comprising:

receiving, at a memory pool node, a request for a value associated with a key;

selecting third identification information from a fifth table of a long-range table type with respect to at least a part of other identification information included in the fifth table;

selecting a particular iteration of a long-range table among the set of long-range tables of which the third table is a member;

selecting a pointer in a range between the third identification information and the key from the particular iteration of the long-range table; and

transmitting a query to a third memory pool node associated with the pointer.

3. The computer-implemented method of claim 2, wherein the request to update routing information is part of an update operation performed over a number of iterations.

4. The computer-implemented method of claim 2 or 3, wherein the update operation is terminated after completion of the number of iterations, the number of iterations being defined by a DHT protocol.

5. The computer-implemented method of any of the preceding claims, further comprising creating a set of trusted connections between the memory pool node and a set of other memory pool nodes.

6. The computer-implemented method of any of claims 2 to 5, wherein the second record in the second table is pseudo-randomly selected.

7. The computer-implemented method of any of claims 2 to 6, wherein the first record in the ID table is selected from information generated during initialization of the set of random walks.

8. The computer-implemented method of any of claims 2 to 7, wherein selecting the pointer, from the particular iteration of the long-range table, in the range between the third identification information and the key is performed randomly.

9. The computer-implemented method of any of claims 2 to 8, wherein the request to update the routing information of the memory pool node is a result of an instantiation of a new memory pool node.

10. The computer-implemented method of any of claims 2 to 9, wherein the set of trusted connections is modified.

11. The computer-implemented method of any of claims 2 to 10, wherein the request to update the routing information of the memory pool node is a result of a failure of at least one other memory pool node.

12. The computer-implemented method of any of claims 2 to 11, wherein the request to update routing information for the memory pool node is a result of modifying a set of keys associated with the DMP.

13. The computer-implemented method of any of claims 2 to 12, wherein the request to update routing information for the memory pool node is a result of modifying the set of trusted connections.

14. The computer-implemented method of any of claims 2 to 13, wherein the DMP obtains a plurality of storage requests from a set of verifier nodes.

15. The computer-implemented method of any of claims 2 to 14, wherein the memory pool node maintains a set of weights associated with a set of connections to a set of other memory pool nodes.

16. The computer-implemented method of any of claims 2 to 15, wherein the memory pool node updates weights in the set of weights due to at least one other memory pool node failing to provide a valid key-value record.

17. The computer-implemented method of any of claims 2 to 16, wherein the memory pool nodes of the DMP implement a consensus protocol to provide consensus among a plurality of memory pool nodes of the DMP regarding a set of values included in a fourth table.

18. The computer-implemented method of any of claims 2 to 17, wherein the memory pool node stores a second value that is also stored by at least one other memory pool node of the DMP.

19. A system, comprising:

a processor; and

a memory comprising executable instructions that, as a result of being executed by the processor, cause the system to carry out the computer-implemented method of any one of claims 1 to 18.

20. A non-transitory computer-readable storage medium having stored thereon executable instructions that, as a result of execution by a processor of a computer system, cause the computer system to perform at least the computer-implemented method of any of claims 1 to 18.

Technical Field

The present invention relates generally to computer-based storage and transfer techniques. The present invention also relates to distributed hash tables, and more particularly to methods and apparatus for improving the security and consistency of storing and retrieving shared information. The invention is particularly suitable for, but not limited to, use in blockchain applications and blockchain-implemented transfers. The present invention also relates to increasing the speed of operation over blockchain networks by enabling faster read and write operations, and to improving the security mechanisms of blockchain-implemented systems by preventing or reducing attacks and/or malicious activity (e.g., routing and storage attacks, such as in distributed memory pools).

Background

In this document, we use the term "blockchain" to include all forms of electronic, computer-based distributed ledgers. These include consensus-based blockchain and transaction-chain techniques, permissioned and permissionless ledgers, shared ledgers, and variations thereof. The most widely known application of blockchain technology is the bitcoin ledger, although other blockchain implementations have also been proposed and developed. Although reference may be made herein to bitcoin for purposes of convenience and illustration, it should be noted that the present invention is not limited to use with the bitcoin blockchain, and alternative blockchain implementations and protocols are also within the scope of the present invention.

While blockchain technology is best known for cryptocurrency implementations, digital entrepreneurs have begun exploring both the cryptographic security systems on which bitcoin is based and the data that can be stored on blockchains, in order to implement new systems. It would be advantageous if blockchains could be used for automated tasks and processes that are not limited to the field of cryptocurrency. Such solutions would be able to exploit the benefits of blockchains (e.g., permanence of events, tamper-resistant logging, distributed processing, etc.) while being more versatile in their applications.

A blockchain is a peer-to-peer electronic ledger implemented as a computer-based, decentralized, distributed system made up of blocks, which in turn are made up of blockchain transactions. Each blockchain transaction is a data structure that encodes the transfer of control of a digital asset or record chain between participants in the blockchain system, and includes at least one input and at least one output. Each block contains a hash of the previous block, so that the blocks become linked together to create a permanent, unalterable record of all blockchain transactions written to the blockchain since its beginning. Blockchain transactions contain small programs, known as scripts, embedded in their inputs and outputs, which specify how and by whom the outputs of the blockchain transaction can be accessed. On the bitcoin platform, these scripts are written using a stack-based scripting language.

In order to write a blockchain transaction to the blockchain, it must be "verified". Network nodes (miners) perform work to ensure that every blockchain transaction is valid, while invalid transactions are rejected by the network. The software client installed on a node performs this verification work on a blockchain transaction that spends unspent transaction outputs (UTXO) by executing its locking and unlocking scripts. If execution of the locking and unlocking scripts evaluates to TRUE, the blockchain transaction is valid and may be written to the blockchain. Thus, in order to be written to the blockchain, a blockchain transaction must i) be verified by the first node that receives it (if the blockchain transaction is verified, the node relays it to the other nodes in the network); ii) be added to a new block built by a miner; and iii) be mined, i.e., added to the public ledger of past blockchain transactions.

A network node receiving a new blockchain transaction will quickly attempt to push the blockchain transaction to other nodes in the network. The new blockchain transaction needs to be "verified" before it is transmitted to other nodes, meaning that it is checked against a set of criteria to ensure that it meets the basic requirements for the appropriate blockchain transaction according to the applicable blockchain protocol.

To write blockchain transactions to the blockchain, a node ("miner") collects blockchain transactions and forms them into a block. The miner then attempts to complete a "proof-of-work" for that block. Miners across the blockchain network compete to be the first to assemble a block of blockchain transactions and complete the associated proof-of-work for that block. A successful miner adds its validated block to the blockchain, and the block is propagated through the network so that other nodes that maintain a copy of the blockchain can update their records. Nodes receiving a block also "validate" the block and all of the blockchain transactions in it, to ensure that it conforms to the formal requirements of the protocol.

One of the bottlenecks associated with blockchain implementations is the delay associated with waiting for a miner to complete the proof-of-work that validates a block of blockchain transactions and causes it to be added to the blockchain. Taking the bitcoin system as an example, the system is designed to take about 10 minutes to validate a block and add it to the blockchain. In the meantime, unconfirmed blockchain transactions accumulate in a memory pool (hereinafter "memory pool"), a near-complete copy of which is maintained at each node in the network. Analysis of the bitcoin architecture shows that, with its 10-minute block validation throughput and typical blockchain transaction and block sizes, the system can process approximately 3 new unconfirmed blockchain transactions per second as accumulated unconfirmed blockchain transactions are merged into new blocks.
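For intuition, the approximate figure of 3 transactions per second follows from simple arithmetic, assuming for illustration 1 MB blocks and 500-byte transactions (both assumptions, not figures from this disclosure): a block then holds about 10^6 / 500 = 2,000 transactions, and at one block per 600 seconds the sustained rate is 2,000 / 600 ≈ 3.3 transactions per second.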

It would be advantageous to use a blockchain-based network (e.g., bitcoin) to implement or facilitate extensive, cryptographically protected exchanges. Such exchanges may involve, for example, payment processing such as credit card transactions, although the invention is not limited to financial applications. However, a throughput of about 3 transactions per second is insufficient for processing such electronic payments, which currently operate at about 50,000 transactions per second. It is therefore desirable to find a solution to the speed and scalability constraints that currently limit the ability of blockchains to handle large numbers of transactions.

Disclosure of Invention

Such a solution has now been devised. Thus, according to the present invention, there is provided a method and apparatus as defined in the appended claims.

In accordance with the present invention, methods and apparatus may be provided that enable fast propagation (including storage and retrieval) of blockchain transactions or other data through a network of memory pool nodes (distributed memory pool or DMP) designed to implement the functionality of a Distributed Hash Table (DHT), as described in more detail below. A DMP implementing a DHT may include a plurality of memory pool nodes distributed over a network, such as a business network (e.g., a DMP network) or other network. In some embodiments, the DMP network stores the data as a record that includes a key and a value.

As described in more detail below, the memory pool nodes form a distributed system that provides for the storage and retrieval of (key, value) pairs, where any participating memory pool node can efficiently retrieve the value associated with a particular key. In some embodiments, the memory pool nodes are also responsible for determining routing information and establishing paths and/or connections between memory pool nodes. The data stored as the "value" of a (key, value) pair may include any data described in this disclosure that can be stored by a computer system, including blockchain transactions.

Additionally or alternatively, the present disclosure describes a memory pool network node that facilitates fast distribution of blockchain transactions over a network of interconnected nodes implementing a DHT for blockchain transactions, a subset of which nodes are memory pool network nodes interconnected by an overlay network. In the following, the term "memory pool network node" may be used interchangeably with the term "memory pool node" for convenience only.

Thus, according to the present invention, there may be provided a computer-implemented method (and corresponding system (s)) as defined in the appended claims.

The method may be described as a method for implementing a DHT using memory pool nodes. The computer-implemented method includes: receiving, at a memory pool node of a distributed memory pool network (DMP) that implements a Distributed Hash Table (DHT), a request to update routing information of the memory pool node; initializing a set of random walks in a key space of the DHT, the set of random walks being used for generating a set of key-value records to be stored by the memory pool node; generating a first record in a first table based at least in part on first identification information, the first identification information being stored in a second table of the long-range table type; generating a second record in a third table of the long-range table type by at least performing a first random walk of the set of random walks, the second record containing second identification information and an address associated with the second identification information; and generating a third record in a fourth table of the short-range table type by obtaining records from at least a table maintained by a second memory pool node of the DMP network.

Drawings

These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described herein. Embodiments of the invention are described below, by way of example only, and with reference to the accompanying drawings, in which:

fig. 1 illustrates an exemplary node network of an overlay network having memory pool nodes, according to an embodiment;

FIG. 2 illustrates an exemplary network of memory pool nodes implementing DHT, according to embodiments;

FIG. 3 illustrates an exemplary network of memory pool nodes implementing operations for updating routing information of memory pool nodes of a DMP network, according to embodiments;

FIG. 4 illustrates an exemplary network of memory pool nodes implementing a retrieval operation of a DMP network, according to an embodiment;

fig. 5 illustrates a flow diagram of an exemplary process for updating routing information for memory pool nodes of a DMP network, according to an embodiment;

FIG. 6 illustrates a flow diagram of an exemplary process for retrieving information from a memory pool node of a DMP network, according to an embodiment;

FIG. 7 illustrates a schematic diagram of an exemplary environment in which various embodiments may be implemented.

Detailed Description

Referring initially to FIG. 1, FIG. 1 illustrates, in block diagram form, an exemplary network associated with a blockchain, referred to herein as blockchain network 100. Blockchain network 100 is an open peer-to-peer membership network that anyone can join without invitation and/or consent from other members. Distributed electronic devices running an instance of the blockchain protocol under which the blockchain network 100 operates may participate in the blockchain network 100. Such distributed electronic devices may be referred to as nodes 102. The blockchain protocol may be, for example, the bitcoin protocol or another cryptocurrency protocol. In other embodiments, the blockchain network 100 may be a storage network running an instance of a DHT protocol, as described in more detail in conjunction with FIG. 2.

The electronic devices that run the blockchain protocol and form the nodes 102 of the blockchain network 100 may be of various types including, for example, computers such as desktop computers, laptop computers, tablet computers, servers, mobile devices such as smartphones, wearable computers such as smartwatches, or other electronic devices.

The nodes 102 of the blockchain network 100 are coupled to each other using suitable communication techniques, which may include wired and wireless communication techniques. In many cases, the blockchain network 100 is implemented at least partially over the internet, and a portion of the nodes 102 may be located at geographically dispersed locations.

Node 102 maintains a global ledger of all blockchain transactions across the blockchain, which is divided into a plurality of blocks, each block containing a hash value of a previous block in the chain. The global ledger is a distributed ledger and each node 102 may store a complete copy or a partial copy of the global ledger. Blockchain transactions conducted by the node 102 that affect the global ledger are validated by other nodes 102, thereby maintaining the validity of the global ledger. Those skilled in the art will understand the details of implementing and operating a blockchain network (e.g., using the bitcoin protocol).

Each blockchain transaction typically has one or more inputs and one or more outputs. Scripts embedded in the input and output specify how and by whom the output of the blockchain transaction is accessed. The output of the blockchain transaction may be the address to which the value was transferred due to the blockchain transaction. The value is then associated with the output address as an unconsumed blockchain transaction output (UTXO). Subsequent blockchain transactions may then reference the address as input to unlock the value.

The nodes 102 may be of different types or classes depending on their functionality. Four basic functions have been proposed as associated with a node 102: wallet, mining, full blockchain maintenance, and network routing. A node 102 may have more than one of these functions; for example, a "full node" provides all four. A lightweight node, such as may be implemented in a digital wallet, may have only wallet and network routing functionality. Instead of storing the entire blockchain, the digital wallet may keep track of block headers, which are used as indexes when querying blocks. The nodes 102 communicate with each other using a connection-oriented protocol, such as TCP/IP (Transmission Control Protocol/Internet Protocol). In various embodiments, the memory pool nodes 104 are not members of the blockchain network and are not responsible for implementing the bitcoin/blockchain protocol.

The present application proposes and describes an additional type or class of node: the memory pool nodes 104. In various embodiments, the memory pool nodes 104 provide for the rapid propagation and retrieval of blockchain transactions. In such embodiments, the memory pool nodes 104 do not store the complete blockchain, nor do they perform mining functions. In this sense, they are similar to lightweight nodes or wallets; however, they include additional functionality that enables rapid propagation and retrieval of blockchain transactions. The operational focus of the memory pool nodes 104 is the propagation and retrieval of the data stored in the DHT implemented by the DMP network and, in particular in the case of a blockchain implementation, the propagation of unconfirmed blockchain transactions to other memory pool nodes 104, from which the unconfirmed blockchain transactions are quickly pushed out to the other nodes 102 in the blockchain network 100.

The memory pool nodes 104 may be collectively referred to as the DMP 106. In various embodiments, the memory pool nodes 104 and the DMP 106 are implemented as part of a business network, as described in this disclosure. Although shown in FIG. 1 as a distinct network of entities for ease of illustration, the memory pool nodes 104 may be integrated into the blockchain network 100. That is, the DMP 106 may be considered a subnet within the blockchain network 100, distributed throughout the blockchain network 100. The memory pool nodes 104 may perform one or more dedicated functions or services.

In order for the DMP 106 to operate reliably and to be able to provide services with a certain level of security, the memory pool nodes 104 need to maintain a good view of the DMP 106, and therefore an efficient routing protocol needs to be established. Whenever a memory pool node 104 receives an update or get request (as described in more detail below), the memory pool node 104 may need to broadcast information to several other memory pool nodes 104 as well as to other nodes 102. In the context of the DMP 106, this equates to finding a solution to the multiple traveling salesman problem (MTSP). There are many solutions to this problem, any one of which may be employed in the DMP 106. Each memory pool node 104 runs route optimization to keep its routing information up to date.

In some embodiments, the DMP106 is implemented as a decentralized IP multicast type network.

In one exemplary case, a regular node 102 on the blockchain network 100 generates a blockchain transaction to be stored by the DMP 106. It may send the blockchain transaction to a memory pool node 104, which then broadcasts it to the other memory pool nodes 104; or, if it knows the IP addresses of memory pool nodes 104, it may send the blockchain transaction directly to multiple memory pool nodes 104. In some examples, all memory pool nodes 104 of the DMP 106 are members of a single address, so that all blockchain transactions sent to that address are received by all memory pool nodes 104; however, in some cases there may be more than one address associated with the DMP 106, and a receiving memory pool node 104 may evaluate, based on the routing information, whether the blockchain transaction needs to be broadcast further to other addresses in order to propagate it to the complete DMP 106.

Each node 102 in the network 100 typically maintains its own memory pool containing the unconfirmed blockchain transactions that it has seen and that have not yet been incorporated into the blockchain by a miner completing a proof-of-work. A significant increase in the number of blockchain transactions arising from use in payment processing would increase the number of blockchain transactions to be stored in each memory pool. Thus, while the nodes in the DMP 106 are able to receive new blockchain transactions at approximately the same time, they may have storage capacity limitations relative to a large and rapidly changing memory pool.

To address this issue, the present application proposes that the memory pool node 104 use a shared memory pool implemented through the DMP 106.

Assuming an average size of 500 bytes per blockchain transaction (TX) and a transaction rate of 10^4 TX/s, the DMP 106 may receive approximately 432 GB of incoming data daily. This data is stored in the memory pool of unconfirmed blockchain transactions for varying amounts of time. The DMP 106 therefore requires a large amount of storage and the ability to store data quickly. In order not to impose excessive requirements on each individual memory pool node 104, the memory pool nodes 104 implement a shared memory pool, i.e., the DMP 106. Instead of each memory pool node 104 keeping all incoming TXs in its own memory pool, each memory pool node 104 stores only a certain portion of the total, together with hash-based key-value associations for the remainder.
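By way of illustration only, the following short script reproduces this figure; the transaction size and rate are the assumptions stated above, not measured values.

```python
# Back-of-the-envelope check of the daily ingest figure (illustrative only).
TX_SIZE_BYTES = 500          # assumed average transaction size
TX_RATE_PER_SECOND = 10_000  # assumed rate of 10^4 TX/s
SECONDS_PER_DAY = 86_400

daily_gb = TX_SIZE_BYTES * TX_RATE_PER_SECOND * SECONDS_PER_DAY / 10**9
print(f"daily ingest: {daily_gb:.0f} GB")  # -> daily ingest: 432 GB
```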

A DMP is a decentralized, distributed system that partitions ownership of a set of keys among member nodes and can efficiently route messages to the owner of any given key. Each node of the network can be seen as one element of a hash table array. A DMP is designed to manage a large number of nodes, allowing new nodes to join the network and old nodes to leave or crash without compromising the integrity of the shared data. A DMP ensures decentralization (there is no central authority and no central coordination), scalability (the system behaves efficiently with millions of nodes), and fault tolerance (the system is reliable and can manage nodes that join, leave, or crash). Each node of the network may stay in contact with only a small number of other nodes, so that the network is not overloaded when changes occur or new data items arrive.

The same concept can be applied to a UTXO database, which contains the set of all unspent transaction outputs on the blockchain. A UTXO database may be built using a DMP implementing a DHT to share content among a set of nodes.

The DHT protocol, described in more detail below, provides a solution to the above-mentioned problems. In particular, the DHT protocol provides global consensus on the validity of blockchain transactions stored in the memory pool nodes 104, ensures the consistency and integrity of the stored data, provides fast propagation and retrieval operations, and prevents routing and storage attacks on the memory pool nodes 104.

One factor to consider in designing a robust DHT is the number of copies required to ensure the robustness and reliability of the overall network. As mentioned above, the fact that nodes can join and leave the network should not affect the availability of data. If the node storing blockchain transaction A leaves the network, it must be possible to find blockchain transaction A in another part of the network. For example, in existing blockchain networks (e.g., bitcoin), the number of copies of the blockchain in the network is equal to the number of full nodes in the network (on average about 5000 copies), but this affects scalability.

In the presently proposed DMP 106, rather than completely replicating the memory pool on each memory pool node 104, the memory pool is implemented by the DMP 106. To provide reliability, the DMP 106 may be implemented with some overlap; that is, each blockchain transaction data item is replicated in more than one memory pool node 104 (although not in every memory pool node 104). As an example, a DMP may be implemented to specify a minimum of 2 copies. Assuming complete independence between nodes, each of which is unavailable with probability p, the probability of both copies being lost at a given time is

P(both copies unavailable) = p^2.
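By way of illustration only, the following sketch computes the chance of losing every copy of a record under the independence assumption stated above; the 1% per-node downtime used in the example is an assumed value.

```python
# Illustrative sketch: probability that every replica of a record is
# unavailable simultaneously, assuming nodes fail independently with
# probability p (the independence assumption stated above).
def loss_probability(p: float, replicas: int) -> float:
    return p ** replicas

print(loss_probability(0.01, 1))  # 0.01   -> single copy
print(loss_probability(0.01, 2))  # 0.0001 -> minimum of 2 copies
```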

FIG. 2 shows, in schematic form, an exemplary blockchain network 200 according to an embodiment, which includes verifier nodes 208 (e.g., nodes such as the nodes 102 shown in connection with FIG. 1) and memory pool nodes 204. In various embodiments, a particular blockchain transaction (T_i) 202 is processed by one or more verifier nodes 208, which cause one or more memory pool nodes 204 to store T_i 202. The blockchain transaction T_i 202 may be a blockchain transaction as described above. However, the storage of transaction information is only one embodiment of the DHT protocol described in this disclosure: the protocol can be used for the consistent and secure storage and retrieval of any data suitable for storage in a DMP.

The verifier nodes 208 may apply various criteria to verify an incoming T_i 202. For example, depending on the computing resources of the executing verifier node 208 and/or the computational parallelism that can be provided, new blockchain transactions may be cached before full batches of any size are scheduled for validation. In addition, a verifier node 208 may be responsible for verifying the format of blockchain transactions or other requests directed to the memory pool nodes 204. In other embodiments, a verifier node 208 determines that T_i 202 has already been verified (e.g., based at least in part on a blockchain protocol) by one or more other computer systems.

The memory pool nodes 204 provide a decentralized storage service that implements various operations, such as a lookup operation, where the operation lookup(k) returns the data associated with the key k. In addition, the memory pool nodes 204 implement a peer-to-peer system according to the DHT protocol to implement the DMP 106. The DMP 106 provides a common identifier space (e.g., a key space, or the set of all possible outputs of a hash function) for both memory pool nodes 204 and keys; each key k is stored at the memory pool node 204 whose identifier is closest to k according to a distance function. As described in more detail below, keys are not stored in the shared space using a deterministic function (e.g., a hash function). In various embodiments, this may prevent, or reduce the effectiveness of, a brute-force attack in which a malicious node generates outputs that place a malicious key between two honest keys.

In some embodiments, to locate a particular memory pool node associated with k, memory pool node 204 forwards the lookup request to other peers whose identifiers are closer to k according to a distance function. In addition, a memory pool node 204 may maintain links to a subset of other memory pool nodes 204. As described in more detail in connection with fig. 6, in accordance with the routing protocol proposed by the present disclosure, the lookup request is forwarded by memory pool node 204 until either a memory pool node associated with k is found, or no node with an identifier closer to the requested key is found.

In the DMP106, keys are distributed among potentially very many memory pool nodes 104. In various embodiments, each memory pool node 204 contains information (e.g., network address, key information, physical location, etc.) that can identify a subset of the other memory pool nodes 204, so that the lookup can be routed deterministically to the memory pool node 204 responsible for the requested key. This limited view of the membership of DMP106 may provide greater scalability to the system.

The DMP 106 is implemented using the DHT protocol described in this disclosure. In various embodiments, the DHT protocol provides two operations: 1) a store operation and 2) a retrieve operation. A given blockchain transaction T_i can be assigned an identifier id(T_i) as follows: id_i = H(T_i), where H represents a hash function. In various embodiments, the identifier id_i is used as a key in the DMP 106. As also described above, a particular address may be used so that blockchain transactions can be received from multiple verifier nodes 208. Each verifier node 208 may then independently validate the blockchain transaction T_i.
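By way of illustration only, the following sketch derives id_i = H(T_i) for a serialized transaction. The disclosure only requires some hash function H; the choice of double SHA-256 (as used for bitcoin transaction identifiers) is an assumption made here.

```python
import hashlib

# Illustrative sketch of id_i = H(T_i); double SHA-256 is assumed, not
# mandated by the text.
def transaction_identifier(serialized_tx: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()

id_i = transaction_identifier(b"\x01\x00\x00\x00...")  # dummy raw TX bytes
print(id_i.hex())
```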

As shown in FIG. 2, each verifier node 208 maintains a local database 214. In the embodiment shown in FIG. 2, each entry of the local database 214 contains the identifier id_i of a blockchain transaction and a binary decision for the verification operation performed on T_i, e.g., "0" for rejection and "1" for acceptance, or "Y" for acceptance and "N" for rejection. This information can be used to respond quickly to verification or retrieval queries. In some embodiments, multiple verifier nodes 208 independently validate T_i and then, based at least in part on the result of verifying T_i, each sends either a store(T_i, id_i) request or a reject(id_i) request to the DMP 106. In these embodiments, a store message causes T_i to be stored on the memory pool node 204 associated with id_i, and a reject message causes the memory pool nodes 204 to reject requests that include id_i. Furthermore, a memory pool node 204 may store T_i only if at least N/2 + 1 store(T_i, id_i) requests are received, where N is the number of verifier nodes 208.

In other embodiments, no reject message is used; once a memory pool node 204 receives the first store(T_i, id_i) message, the memory pool node 204 starts a timer for receiving more than N/2 store messages. In these embodiments, if the timeout expires before enough store messages have been received, the blockchain transaction is discarded. In addition, each memory pool node 204 may maintain, for each T_i, a count of the store messages received, requiring more than N/2 + 1. These mechanisms can be used to maintain quorum requirements and/or to ensure consensus among a quorum. In some embodiments, the memory pool nodes 204 maintain a database 210 or other data structure for storing the verification decisions received from the verifier nodes 208 for a given blockchain transaction.
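By way of illustration only, the following minimal sketch captures the quorum-and-timer rule described above; the number of verifier nodes N, the timeout, and the class and method names are illustrative assumptions, not part of the disclosure.

```python
import time

N = 5                  # assumed number of verifier nodes
TIMEOUT_SECONDS = 30   # assumed timer duration

class PendingStore:
    """Tracks store(T_i, id_i) votes for one transaction at a memory pool node."""

    def __init__(self) -> None:
        self.first_seen = time.monotonic()
        self.voters: set[str] = set()   # verifier ids that sent store()

    def on_store(self, verifier_id: str) -> bool:
        """Record a store message; True once the N/2 + 1 quorum is met."""
        self.voters.add(verifier_id)
        return len(self.voters) >= N // 2 + 1

    def expired(self) -> bool:
        """True if the timer lapsed before a quorum formed (discard T_i)."""
        return time.monotonic() - self.first_seen > TIMEOUT_SECONDS
```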

In various embodiments of the DHT protocol described in this disclosure, retrieval of a validated blockchain transaction T_i requires broadcasting a query for id_i so that it reaches at least one memory pool node 204. The specific memory pool node responsible for storing id_i sends the querying node a message containing the data associated with id_i. In this embodiment, if M memory pool nodes are required to store T_i, the querying node must receive at least M/2 + 1 messages containing the data associated with id_i in order to consider the stored blockchain transaction consistent. In addition, the querying node may interrogate N/2 + 1 random verifier nodes 208 to determine, from the information stored in the local databases 214 of the verifier nodes 208, whether T_i is valid.
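By way of illustration only, a querying node's M/2 + 1 consistency check might be sketched as follows; the function name and types are illustrative assumptions.

```python
from collections import Counter

# Illustrative client-side consistency check: with M memory pool nodes
# storing T_i, accept a retrieved value only if at least M/2 + 1 responses
# agree on it.
def consistent_value(responses: list[bytes], m: int) -> bytes | None:
    if not responses:
        return None
    value, hits = Counter(responses).most_common(1)[0]
    return value if hits >= m // 2 + 1 else None

print(consistent_value([b"tx", b"tx", b"bad"], m=3))  # b'tx' (2 >= 2)
```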

As described in more detail below in conjunction with FIG. 3, the key-value pairs of the DMP 106 may be stored in tables 210 maintained by the memory pool nodes 204. The tables 210 may contain information associated with the DHT protocol, such as that used by the various functions described above. Although the verifier nodes 208 and the memory pool nodes 204 are shown in FIG. 2 as distinct entities, in various embodiments one or more of the memory pool nodes 204 and/or verifier nodes 208 may be executed by the same entity, such as a single computer system. For example, as described above in connection with FIG. 1, the merchant node executes components of the blockchain protocol. In other embodiments, the DMP 206 is implemented without the verifier nodes 208. In one example of these embodiments, T_i 202 is stored directly by the memory pool nodes 204 without any verification operation. In another example, the memory pool nodes 204 themselves may perform a verification operation before processing T_i 202.

FIG. 3 schematically illustrates an exemplary embodiment 300 of updating the routing information of a set of memory pool nodes of a DMP network. The exemplary embodiment 300 shown in FIG. 3 includes a new memory pool node S_x 304, an existing memory pool node S_y 308, and another existing memory pool node S_z 306. As mentioned above, the DHT protocol relies on a network of trust relationships between memory pool nodes. The set of connections for a particular memory pool node (e.g., S_x) may be generated based at least in part on both routing information and application-level information. In various embodiments, the issuance or storage of trust certificates involves no central authority. In these embodiments, each memory pool node maintains its own record of the peers it trusts. In various other embodiments, for example during an update process, the memory pool node S_x 304 is not a new memory pool node.

Given a number n of memory pool nodes in a network (e.g., the DMP 106), the number r of routing table records per memory pool node grows sublinearly with the total number of stored keys; consistent with the random-walk analysis given below, it is on the order of

r = O(√(mK)),

where m is the number of honest edges in the trust graph and K is the number of keys stored per memory pool node.

As described above, the DHT protocol defines two functions and/or operations: an update operation and a get operation. In various embodiments, the update operation is used to build the routing tables and insert keys at each memory pool node. As described in more detail below in conjunction with FIG. 4, the get operation is performed by a particular node (e.g., S_x) to find the target key-value pair represented by a key k (e.g., a record in the DHT).

In various embodiments, a particular memory pool node (e.g., S_x) is identified by a public key. For example, the identification information of memory pool node S_x 304 is represented by its public key P_x and its current IP address addr_x, which are securely linked by the signed record Sign_x(P_x, addr_x), where Sign_x() denotes a signature made using the private key associated with S_x 304. The signed record may then be used to store the node identification information in the DMP. In some embodiments, to send a message to peer S_y 308, memory pool node S_x 304 queries the DMP using the identifier P_y. Memory pool node S_x 304 then uses the public key P_y of S_y 308 to verify the signature of any record (e.g., (P_y, addr_y)) returned in response to the query. Once memory pool node S_x 304 has validated the information, S_x 304 sends the message directly to addr_y of S_y 308. In these embodiments, when a memory pool node changes location or receives a new network address, a new record is generated and stored in the DMP.
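By way of illustration only, the following sketch shows how the signed record Sign_x(P_x, addr_x) could be produced and verified. The disclosure does not fix a signature scheme; Ed25519 (via the Python cryptography package) and the example address are assumptions made here.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative sketch of the signed record Sign_x(P_x, addr_x).
private_key = Ed25519PrivateKey.generate()
p_x = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
addr_x = b"203.0.113.7:8333"        # example current network address
record = p_x + addr_x               # the (P_x, addr_x) record
signature = private_key.sign(record)

# A peer that retrieves (P_x, addr_x, signature) from the DMP verifies the
# signature with P_x before messaging addr_x directly; verify() raises
# InvalidSignature if the record was tampered with.
private_key.public_key().verify(signature, record)
```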

If the above-described get operation has a high probability (relative to other protocols) of returning the correct value, the DHT protocol can be considered robust against Sybil attacks, even when there is malicious action against both the update and get operations. In various embodiments, the DHT protocol has the following characteristics to reduce the effectiveness of a Sybil attack or other attack: routing table information is minimized to reduce space complexity, avoiding problems such as overload caused by a large number of faulty nodes; a new node generates its own routing information by collecting information from peers, reducing the effect of malicious nodes repeatedly sending incorrect information; and the key space is non-deterministic, to avoid clustering attacks.

The routing of data in a DMP may be represented by an undirected graph. Given a number g of malicious edges and a number n of memory pool nodes, in some embodiments the property g << n (g much lower than n) guarantees an operable DMP 106 network. Although an attacker may create any number of malicious nodes, it is difficult for a malicious node to create an edge (e.g., a network connection or relationship) to an honest memory pool node. For example, a random walk starting from a memory pool node is likely to end at another memory pool node rather than at a malicious node. However, because the originating memory pool node may not be able to determine which returned messages originate from malicious nodes, no assumptions can be made about the authenticity of the messages. These properties are described in detail in Haifeng Yu et al., "SybilLimit: A Near-Optimal Social Network Defense Against Sybil Attacks," IEEE/ACM Transactions on Networking, vol. 18, no. 3 (2010), and F. R. K. Chung, "Spectral Graph Theory," Conference Board of the Mathematical Sciences, no. 92 (1994), which are incorporated herein by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and set forth in its entirety in this disclosure.

As shown in FIG. 3, S_x 304 may generate routing information by making random walks 332. In embodiments, the routing table of S_x 304 is generated by initiating r independent random walks of length w in the DMP, and the information retrieved from these random walks is stored in a table Tab_RND(S_x) 316. Using the random walks, S_x 304 may collect one or more random key-value records from the memory pool node at which each random walk terminates. This information may then be stored in the table Tab_DIST(S_x, i) 312, where i represents the routing iteration.

As described in more detail below, this routing information is used during lookup operations (e.g., get). If the local Tab_CLOSE(S_x, i) 314 table of S_x 304 does not contain a key, a lookup message is sent to the nodes in the routing table of S_x 304 (e.g., Tab_DIST(S_x, i) 312). Depending on the value of r, one or more memory pool nodes may store the queried key-value record locally (e.g., in the Tab_CLOSE table of a particular node). In addition, the protocol may define r_n, r_d, and r_c, where r_n is the number of elements of Tab_RND(S_x) 316, r_d is the number of elements of Tab_DIST(S_x, i) 312, and r_c is the number of elements of Tab_CLOSE(S_x, i) 314. In various embodiments, the values r_n, r_d, and r_c may be the same as r.

In various embodiments, the routing information is generated solely by making a number of random walks. In addition, the random walk distribution can be balanced to account for the different numbers of edges between different memory pool nodes. Thus, if each physical memory pool node replicates itself according to its number of edges, the total number of keys stored in the system will be mK, where m is the number of honest edges and K is the number of keys stored per memory pool node, and the number of independent random walks will be on the order of O(√(mK)).

In some embodiments, a global circular lexicographic ordering of keys is used. For example, given any index a < i < b, the key k_i lies in the portion of the key space between the two nodes x_a and x_b. As described above, keys are not stored in the shared key space using a deterministic function (e.g., a hash function), in order to prevent malicious nodes from brute-forcing a desired output and inserting a bogus key between two honest keys.
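By way of illustration only, membership in such a circular lexicographic key space can be tested with a small wrap-around comparison; the function below is an illustrative sketch, not part of the disclosure.

```python
# Sketch of membership in a circular (wrap-around) lexicographic key space:
# is key k on the arc running from a to b?
def between_circular(a: str, k: str, b: str) -> bool:
    if a <= b:
        return a <= k <= b
    return k >= a or k <= b   # the arc wraps past the end of the key space

print(between_circular("apple", "mango", "pear"))   # True
print(between_circular("pear", "apple", "mango"))   # True (wraps around)
```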

As shown in FIG. 3, node S_x 304 generates a long-range routing table Tab_DIST(S_x, i) 312 containing r pointers to other memory pool nodes whose identification information is uniformly distributed over the key space. In these embodiments, the pointers are stored by initiating r random walks and collecting random identification information from the terminating memory pool nodes. The short-range routing table Tab_CLOSE(S_x, i) 314 contains r key-value records whose keys are lexicographically closest to the identification information of S_x 304.

According to the routing model, each node uses random walks to select random keys, providing an even distribution of pointers over the key space. Furthermore, clustering attacks can be prevented by applying the routing model recursively. In various embodiments, at a given iteration i, each memory pool node selects a pointer from the previous, (i-1)-th, long-range table 336 (e.g., Tab_DIST(S_x, i-1)) and uses the pointer to select the identification information to be stored in Tab_ID(S_x) 310. A recursion degree of α ensures that each memory pool node is located at α positions in the key space and collects long-range and short-range tables for α different pieces of identification information; the degree of recursion is measured in terms of O(√(mK)).

As shown in FIG. 3, the update operation includes two phases. In the first phase, node S_x 304 initiates random walks to collect the key-value records to be stored in Tab_RND(S_x) 316. In the second phase, for each iteration i < α of the update operation, the other routing tables Tab_ID(S_x) 310, Tab_DIST(S_x, i) 312, and Tab_CLOSE(S_x, i) 314 are generated. In various embodiments, the number of entries in each table may be limited to a certain total. Returning to FIG. 3, during a routing iteration, memory pool identification information may be selected recursively based at least in part on the current routing iteration.

Further, as described above, the second phase can create a Tab_ID(S_x) 310 record by selecting random identification information from the records of the (i-1)-th Tab_DIST table of S_x 304. In some embodiments, the first entry is a random key from the Tab_RND(S_x) 316 table. Further, in an embodiment, a Tab_DIST(S_x, i) 312 record is created by walking to a memory pool node 334 (e.g., node S_y 308) and collecting the address of the resulting memory pool node together with the i-th identification information from its Tab_ID table 320. Finally, a Tab_CLOSE(S_x, i) 314 record may be generated based at least in part on Tab_RND 318, by requesting at least the records of Tab_RND 318 whose keys are lexicographically close to the identification information at the given iteration i.
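By way of illustration only, the two phases and the three per-iteration table constructions described above can be summarized in the following self-contained sketch. The constants ALPHA and R and the walk() stub are assumptions standing in for real random walks over the trust graph; this is not the patented implementation.

```python
import random

ALPHA = 4   # assumed recursion degree
R = 8       # assumed records per table

def walk():
    """Stub for one random walk: returns (node_id, address, key-value record)."""
    nid = f"{random.getrandbits(32):08x}"
    return nid, f"10.0.0.{random.randrange(256)}", (nid, b"value")

def update():
    # Phase 1: r random walks populate Tab_RND with random key-value records.
    tab_rnd = [walk()[2] for _ in range(R)]

    tab_id, tab_dist, tab_close = [], [], []
    for i in range(ALPHA):   # Phase 2: build the tables for each i < ALPHA
        if i == 0:
            tab_id.append(random.choice(tab_rnd)[0])          # key from Tab_RND
        else:
            tab_id.append(random.choice(tab_dist[i - 1])[0])  # id from Tab_DIST(i-1)

        # Tab_DIST(i): (identification info, address) pairs from r walks.
        tab_dist.append([walk()[:2] for _ in range(R)])

        # Tab_CLOSE(i): records lexicographically closest to tab_id[i],
        # here approximated over the locally known records.
        tab_close.append(sorted(tab_rnd, key=lambda rec: rec[0])[:R])

    return tab_id, tab_dist, tab_close
```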

In various embodiments, the queries required to collect the routing information are independent and randomly distributed in the key space near the specific identification information.

FIG. 4 schematically illustrates an exemplary embodiment 400 of a get operation performed by a memory pool node of a DMP implementing a DHT. The exemplary embodiment 400 shown in FIG. 4 illustrates a memory pool node S_x 404 that contacts an existing memory pool node S_y 408 and a memory pool node S_z 406 in order to locate a target record in the DMP maintained by node S_z 406. As described above, the DHT protocol relies on a network of trust relationships (e.g., connections) between memory pool nodes. The routing information generated as described above for a particular memory pool node (e.g., S_x) includes a table (e.g., Tab_DIST 412) containing a set of records indicating memory pool nodes distributed over the key space, together with the network addresses associated with those memory pool nodes.

In various embodiments, retrieving the target record involves finding, in a long-range routing table, a route pointer to a memory pool node that holds the target record in its short-range routing table 418. As described in more detail in connection with FIG. 6, the process may be iterative, in which case the get operation causes the computing resource performing it to follow multiple route pointers through successive long-range routing tables until it reaches the target memory pool node holding the target record in its short-range routing table 418.

As described above, the update operation may ensure that long-range pointers spanning the entire key space maintain routing information about the target record and/or target key. Further, at least some of the long-range pointers may not be associated with a malicious entity. As shown in FIG. 4, the key-value retrieval procedure (e.g., a get operation) initiated by node S_x 404 begins with the route pointers in the Tab_DIST 412 table of S_x 404. If the target record cannot be reached (e.g., the route pointer or other information is not included in Tab_DIST 412), execution may proceed to node S_y 408 to repeat the process. In FIG. 4, random walks 432 are shown as solid lines between memory pool nodes, and other possible random walks are shown as dashed lines.

During an iteration of a get operation, the Tab_DIST table 414 of node S_y 408 may be searched for the identification information (shown as "id" in FIG. 4) closest to the target record (e.g., key k) being searched for. In various embodiments, given a random iteration i, a random pointer to node S_z 406 is selected in Tab_DIST(S_y, i) 416. In these embodiments, the selected long-range pointer to node S_z 406 must fall within the range given by id and k. That is, node S_z 406 must satisfy the following condition 422: id ≤ identification information of node S_z 406 ≤ k.

The selected node (node S_z 406 in the example shown in FIG. 4) is then queried for the record associated with key k. If there is a hit(k, i) on the record 424 (e.g., the record is in the Tab_CLOSE table 418 of node S_z 406), node S_z 406 may return the record to the memory pool node responsible for transmitting the query (node S_y 408 in the example shown in FIG. 4). Node S_y 408 may then return the record to node S_x 404, based at least in part on the query from node S_x 404 to node S_y 408. For example, the record may be returned along the same route used to locate it. If there is no hit on the record, the get operation may continue and perform another iteration.
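By way of illustration only, a single iteration of this get procedure, including condition 422, might be sketched as follows; the table representations and names are illustrative assumptions.

```python
import random

# Illustrative single iteration of the get operation for key k, following the
# FIG. 4 description. The table arguments are plain data stand-ins.
def get_step(tab_close: dict, tab_dist: list, closest_id: str, k: str):
    if k in tab_close:                       # hit(k, i): record held locally
        return ("hit", tab_close[k])
    # Condition 422: candidate pointers must satisfy id <= node_id <= k.
    candidates = [(nid, addr) for nid, addr in tab_dist
                  if closest_id <= nid <= k]
    if not candidates:
        return ("miss", None)                # repeat the process from a peer
    return ("forward", random.choice(candidates))

print(get_step({}, [("k2", "10.0.0.5")], "k1", "k9"))  # forward to node k2
```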

In various embodiments, signed messages are exchanged between peers in a DMP (e.g., memory pool nodes connected by a single hop in the network) to detect trusted connections with other memory pool nodes. For example, if node S_y 408 and node S_x 404 are peers (e.g., bound by a trusted connection), and node x_1, node x_2, and node x_3 are peers of node S_x 404, then node S_y 408 may request a direct connection to node x_1, node x_2, and node x_3. Furthermore, to improve the security level, node x_1, node x_2, and node x_3 may be queried directly to verify their current connections. If the connections are confirmed, node S_y 408 may determine to create a new connection with a peer of node S_x 404 (e.g., node x_2). In these embodiments, the new connection is granted only if node x_2 responds positively to the connection request.

In other embodiments, the memory pool nodes may assign different weights β_x,i to the trusted connections with particular memory pool nodes. For example, node S_x 404 may assign weights to peer nodes x_1, x_2, and x_3, subject at least in part to the constraint that the weights over all of a node's peers sum to one (i.e., Σ_i β_x,i = 1).

In various embodiments, higher trust in a peer corresponds to a higher weight. For example, a node S_y 408 with weight β_x,y = 2/3 is considered more trusted than a node S_z 406 with weight β_x,z = 1/3, where node S_y 408 and node S_z 406 are both peers of the same memory pool node (e.g., node S_x 404). As noted above, the weights may be used to adjust the probability of selecting individual hops in a random walk, or to initialize a new trusted connection. For example, given β_x,y = 1/3 and β_x,z = 2/3, the probability that a random walk from node S_x 404 reaches node S_z 406 is twice the probability that it reaches node S_y 408.

In these embodiments, each memory pool node is responsible for maintaining the weight information associated with other memory pool nodes. In addition, node S_x 404 may update the peer weight values by decreasing at least one weight (e.g., weight β_x,y) by a value ε and increasing each other weight by ε/(m_x - 1), where m_x is the number of peers of node S_x 404. For example, if a key falls into the Tab_CLOSE of a node such as S_y 408 and S_y 408 fails to provide a valid key-value record, S_x 404 may determine to reduce the weight associated with that peer.
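By way of illustration only, the weighted hop selection and the ε-penalty update described above can be sketched as follows; the weights, ε, and peer names are illustrative values, not part of the disclosure.

```python
import random

# Sketch of weight maintenance: peers are chosen for random-walk hops in
# proportion to their weights, and a misbehaving peer loses epsilon, which
# is redistributed over the remaining m_x - 1 peers.
weights = {"x1": 1 / 3, "x2": 1 / 3, "x3": 1 / 3}   # beta_x,i values, sum = 1
EPSILON = 0.05

def next_hop() -> str:
    peers = list(weights)
    return random.choices(peers, weights=[weights[p] for p in peers], k=1)[0]

def penalize(bad_peer: str) -> None:
    weights[bad_peer] -= EPSILON
    share = EPSILON / (len(weights) - 1)
    for peer in weights:
        if peer != bad_peer:
            weights[peer] += share

penalize("x2")            # x2 failed to return a valid key-value record
print(next_hop(), weights)
```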

FIG. 5 illustrates, in flow diagram form, an exemplary process 500 for performing an update operation to generate routing information for a memory pool node of a DMP, according to an embodiment. Some or all of process 500 may take place under the control of one or more computer systems configured with executable instructions and/or other data, and may be implemented as executable instructions that execute collectively on one or more processors. The executable instructions and/or other data may be stored on a non-transitory computer-readable storage medium (e.g., a computer program persistently stored on magnetic, optical, or flash media). The exemplary process 500 may be performed by a memory pool node, such as one of the memory pool nodes 204 described in connection with FIG. 2, of a DMP (e.g., the DMP 206 described in connection with FIG. 2).

That is, a memory pool node (e.g., one of the memory pool nodes 204 described in conjunction with FIG. 2) may perform the exemplary process 500 to carry out the update operation described in conjunction with FIG. 5. Such a memory pool node may comprise any suitable computing device (e.g., a server of a data center, a client computing device, multiple computing devices in a distributed system of a computing resource service provider, or any suitable electronic client device). Further, process 500 may be performed periodically, or aperiodically in response to a triggering event (e.g., in response to a new memory pool node joining the DMP or the failure of an existing memory pool node), or for any other suitable reason for updating routing information.

The exemplary process 500 includes a series of operations in which the system performing the exemplary process 500 carries out the update operation of the DHT protocol, as described above. In step 502 of the exemplary process 500, the system receives a set of trusted connections and a set of key-value records. The set of trusted connections and the set of key-value records may be received as inputs to the update operation. In various embodiments, this information may be omitted or empty (e.g., the memory pool node may store no key-value records and/or no trusted connections).

At step 504, the system initializes random walks for the set of key-value records. As described above, the set of random walks may be initialized to create a set of key-value records distributed within the key space.

In step 508, the system generates the i-th entry of Tab_ID(S_x) based at least in part on a record selected from Tab_DIST(S_x, i-1). In the absence of a Tab_DIST(S_x, i-1) table (e.g., for a new memory pool node), a random key is selected from the Tab_RND table of the system (e.g., the memory pool node).

In step 510, the system generates a Tab_DIST(S_x, i) record, including a node address and identification information, based on the Tab_ID(S_y) table. As described above in connection with FIG. 3, a random walk is performed, and address information is obtained from the memory pool node contacted as a result of the random walk. The identification information and address information of that memory pool node are then stored as a record in the Tab_DIST(S_x, i) table.

In step 512, the system generates Tab_CLOSE(S_x, i) records, including information obtained from the Tab_RND tables of the memory pool nodes contacted as a result of making the random walks. As described above in connection with FIG. 3, a contacted memory pool node transmits the records of its local Tab_RND table that are lexicographically closest to the identification information for the particular iteration of the update operation.

At step 514, the system determines whether there are more iterations to perform. For example, the process 500 may have a set number of iterations. In another example, the system performing process 500 determines the number of iterations at step 504, during initialization of the update operation. In yet another example, the system may continue iterating process 500 until a certain amount of routing information has been obtained. If the number of iterations has not been reached, the system returns to step 506. If the number of iterations has been reached, the system proceeds to step 516 and completes the update operation.

Note that one or more operations performed in the exemplary process 500 shown in fig. 5 may be performed in various orders and combinations, including in parallel. Further, one or more operations performed in the exemplary process 500 shown in fig. 5 may be omitted.

Fig. 6 illustrates, in flow diagram form, an exemplary process 600 for performing a fetch operation to obtain data associated with a key, for a memory pool node of a DMP network, in accordance with one embodiment. Some or all of process 600 may be performed under the control of one or more computer systems configured with executable instructions and/or other data, and may be implemented as executable instructions executing collectively on one or more processors. The executable instructions and/or other data may be stored on a non-transitory computer-readable storage medium (e.g., a computer program persistently stored on magnetic, optical, or flash media). The exemplary process 600 may be performed by a memory pool node of a DMP (e.g., the DMP 206 described in connection with fig. 2), such as one of the memory pool nodes 204 described in connection with fig. 2.

That is, a memory pool node (e.g., one of the memory pool nodes 204 described in connection with fig. 2) may perform the example process 600 to perform the fetch operation described in connection with fig. 6. Such a memory pool node may comprise any suitable computing device (e.g., a server of a data center, a client computing device, a plurality of computing devices in a distributed system of computing resource service providers, or any suitable electronic client device).

The exemplary process 600 includes a series of operations in which the system performing the exemplary process 600 carries out the fetch operation of the DHT protocol, as described above. At step 602 of the exemplary process 600, the system receives a request for a value associated with the key k. The request may be received from a system external to the DMP network or from a memory pool node that is a member of the DMP network.

In step 604, the system performs a lookup operation in the table Tab_CLOSE to determine whether the value associated with the key k is located in a record of the table. At step 606, if the record is in the table, the system returns the value associated with the key. However, if the record is not in the table Tab_CLOSE, the system may proceed to step 608.

In step 608, the system determines the identification information (id) contained in the Tab_DIST table that is closest to the key k. As described above, the distance may be determined based at least in part on the update operation described above in connection with fig. 3.
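
For example, if identifiers are treated as points on a circular key space, the selection in step 608 might look like the following sketch. The ring metric and the toy key size are assumptions introduced for illustration; the document does not fix a particular distance function here.

```python
KEY_BITS = 32
MAX_KEY = 2 ** KEY_BITS

def ring_distance(a, b):
    """Distance on a circular key space (one illustrative metric)."""
    d = abs(a - b)
    return min(d, MAX_KEY - d)

def closest_id(tab_dist_ids, k):
    """Step 608: pick the identifier in Tab_DIST closest to key k."""
    return min(tab_dist_ids, key=lambda node_id: ring_distance(node_id, k))

print(closest_id([10, 2**31, MAX_KEY - 5], k=7))  # -> 10
```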

In step 610, the system may select iteration i. The iteration may be chosen randomly or pseudo-randomly. Further, the iterations may represent particular tables generated during the iterations of the update operation, as described above.

In step 612, the system selects a random pointer z from the table Tab_DIST(Sx, i), based at least in part on z lying in the range between id and k. As described above, the pointer may be an address of another memory pool node of the DMP network implementing the DHT. In this manner, during successive iterations of a fetch operation, the system may move through the key space toward the particular memory pool node whose table Tab_CLOSE contains the value associated with the key.

In step 614, the system queries the memory pool node at pointer z. In various embodiments, the query includes the key k. Further, the query may include additional information, such as an address associated with the requestor. In various embodiments, the additional information enables the memory pool node that is associated with key k and contains the value to transmit the value directly to the address associated with the requestor. This is in contrast to transmitting the response back through the memory pool nodes that forwarded the query; because the value need not be returned through the DMP network along the same network path, additional network traffic can be avoided.
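
The following self-contained Python sketch ties steps 602-614 together on a two-node toy network. The Node class, its dict-based tables, the numeric distance, and the in-memory "transport" are all illustrative assumptions; in particular, the sketch returns the value to the caller rather than modeling a direct reply to the requestor's address.

```python
import random


class Node:
    """Toy memory pool node holding the tables used by the fetch operation."""

    def __init__(self, tab_close, tab_dist, network):
        self.tab_close = tab_close   # key -> value records (Tab_CLOSE)
        self.tab_dist = tab_dist     # iteration -> [(id, address)] (Tab_DIST)
        self.network = network       # address -> Node (toy transport)

    def query(self, address):
        # Stand-in for forwarding the query (which, per the text above, may
        # also carry the requestor's address for a direct reply) to a peer.
        return self.network[address]


def get_value(node, k, max_hops=8):
    """Toy sketch of the fetch operation (steps 602-614)."""
    for _ in range(max_hops):
        # Steps 604-606: answer locally if the record is in Tab_CLOSE.
        if k in node.tab_close:
            return node.tab_close[k]
        # Step 608: identification information closest to key k in Tab_DIST.
        ids = [nid for entries in node.tab_dist.values() for nid, _ in entries]
        nearest = min(ids, key=lambda nid: abs(nid - k))
        # Step 610: select an iteration i at random.
        i = random.choice(list(node.tab_dist))
        # Step 612: random pointer z in the range between id and k.
        lo, hi = sorted((nearest, k))
        in_range = [e for e in node.tab_dist[i] if lo <= e[0] <= hi]
        z = random.choice(in_range or node.tab_dist[i])
        # Step 614: query the memory pool node at pointer z.
        node = node.query(z[1])
    return None  # no value found within the hop budget


# Two-node toy network: node "a" routes the query for key 42 to node "b".
net = {}
net["b"] = Node({42: "value-for-42"}, {0: [(40, "b")]}, net)
a = Node({}, {0: [(40, "b")]}, net)
print(get_value(a, 42))  # -> "value-for-42"
```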

Note that one or more operations performed in the exemplary process 600 shown in fig. 6 may be performed in various orders and combinations, including in parallel. Further, one or more operations performed in the exemplary process 600 shown in fig. 6 may be omitted. In various embodiments, there is a maximum number of iterations that the process 600 will perform; if no value is returned within that limit, the process 600 terminates.

FIG. 7 schematically illustrates, in simplified form, an exemplary computing device 700 in which various embodiments of the present disclosure may be implemented. In various embodiments, the exemplary computing device 700 may be used to implement any of the systems or methods shown and described herein. For example, the exemplary computing device 700 may be configured to function as a data server, a network server, a portable computing device, a personal computer, or any electronic computing device. As shown in fig. 7, the exemplary computing device 700 may include one or more processors 702, which may be configured to communicate with, and be operatively coupled to, a plurality of peripheral subsystems via a bus subsystem 704. The processors 702 may be utilized to implement the methods described herein. These peripheral subsystems may include a storage subsystem 706 (including a memory subsystem 708 and a file storage subsystem 710), one or more user interface input devices 712, one or more user interface output devices 714, and a network interface subsystem 716. The storage subsystem 706 may be used for temporary or long-term storage of information associated with the blockchain transactions or operations described in the present disclosure.

Bus subsystem 704 may provide a mechanism for enabling the various components and subsystems of exemplary computing device 700 to communicate with one another as desired. Although bus subsystem 704 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. The network interface subsystem 716 may provide an interface to other computing devices and networks. The network interface subsystem 716 may serve as an interface for receiving data from other systems and transmitting data from the exemplary computing device 700 to other systems. For example, the network interface subsystem 716 may enable a user to connect the device to a wireless network or some other network, such as the networks described herein. The bus subsystem 704 may be used to communicate data associated with blockchain transactions or operations described in this disclosure to one or more processors 702 and/or other entities external to the system via the network interface subsystem 716.

The user interface input devices 712 may include one or more user input devices such as a keyboard, a pointing device (e.g., an integrated mouse, trackball, touchpad, or tablet), a scanner, a bar code scanner, a touch screen incorporated into a display, an audio input device (e.g., a voice recognition system or microphone), and other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and mechanisms for inputting information into the computing device 700. The one or more user interface output devices 714 may include a display subsystem, a printer, or a non-visual display such as an audio output device. The display subsystem may be a Cathode Ray Tube (CRT), a flat panel device such as a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, or a projection or other display device. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from the computing device 700. The one or more output devices 714 may be used, for example, to present user interfaces that facilitate user interaction with applications performing the processes described herein, and variations thereof, where such interaction is appropriate.

The storage subsystem 706 may provide a computer-readable storage medium for storing the basic programming and data constructs that may provide the functionality of at least one embodiment of the present disclosure. Applications (programs, code modules, instructions) when executed by one or more processors may provide the functionality of one or more embodiments of the present disclosure and may be stored in the storage subsystem 706. These application program modules or instructions may be executed by one or more processors 702. The storage subsystem 706 may additionally provide a repository for storing data used in accordance with the present disclosure. The storage subsystem 706 may include a memory subsystem 708 and a file/disk storage subsystem 710.

Memory subsystem 708 may include a number of memories, including a main Random Access Memory (RAM) 718 for storing instructions and data during program execution and a Read Only Memory (ROM) 720 that may store fixed instructions. File storage subsystem 710 may provide non-transitory persistent (non-volatile) storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a compact disk read-only memory (CD-ROM) drive, an optical disk drive, removable media cartridges, and other like storage media.

Exemplary computing device 700 may include at least one local clock 724. The local clock 724 may be a counter that represents the number of ticks that have occurred since a particular start date, and may be located entirely within the exemplary computing device 700. The local clock 724 may be used to synchronize data transfers between the processors in the exemplary computing device 700, and all of the subsystems included therein, according to a particular clock pulse, and may be used to coordinate synchronization operations between the exemplary computing device 700 and other systems in a data center. In one embodiment, the local clock 724 is an atomic clock. In another embodiment, the local clock is a programmable interval timer.
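
A minimal sketch of the tick-counter idea follows, assuming a hypothetical tick resolution and start date (neither is specified in the text above):

```python
import time

TICKS_PER_SECOND = 1_000      # assumed tick resolution
START_DATE = 1_577_836_800    # assumed start date (2020-01-01 UTC, epoch seconds)

def local_clock_ticks(now=None):
    """Number of ticks elapsed since the fixed start date."""
    now = time.time() if now is None else now
    return int((now - START_DATE) * TICKS_PER_SECOND)

print(local_clock_ticks())
```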

Exemplary computing device 700 may be of various types, including a portable computer device, a tablet computer, a workstation, or any other device described herein. Further, another device may be connected to the exemplary computing device 700 through one or more ports 726 (e.g., USB, a headphone jack, a Lightning connector, etc.). Such a device may include a plurality of ports configured to accept fiber optic connectors, and may accordingly be configured to convert optical signals into electrical signals that are transmitted, through the port connecting the device to the exemplary computing device 700, for processing. Because of the ever-changing nature of computers and networks, the description of the exemplary computing device 700 depicted in fig. 7 is intended only as a specific example illustrating one embodiment of the device. Many other configurations are possible, having more or fewer components than the system shown in fig. 7.

One or more embodiments of the invention may be described as providing an improved blockchain implementation. By enabling faster read and write operations, the invention can increase the speed of operations over blockchain networks. Furthermore, it may provide increased security for blockchain-implemented systems, as it can help prevent attacks or malicious activity, such as routing and storage attacks in distributed memory pools. Thus, the present invention may provide a more secure blockchain solution. In addition, it may provide a mechanism for ensuring that consensus is reached within the network regarding the validity of blockchain transactions stored in distributed memory pools, thereby enhancing the overall performance, reliability, and security of the network. It also provides an improved architecture and improved memory resources and capabilities for blockchain systems. Other advantages of the present invention may also be provided.

It should be noted that, in the context of describing the disclosed embodiments, unless otherwise indicated, the use of expressions relating to executable instructions (also referred to as code, applications, agents, etc.) performing operations that an "instruction" does not ordinarily perform unaided (e.g., transmitting data, performing computations, etc.) indicates that the instructions are being executed by a machine, thereby causing the machine to perform the specified operations.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims that follow. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The words "comprising", "comprises", and the like do not exclude the presence of elements or steps other than those listed in any claim or in the specification as a whole. In this specification, "comprises" means "includes or consists of" and "comprising" means "including or consisting of". The singular reference of an element does not exclude the plural reference of such elements, and vice versa. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
