System and method for monitoring a distributed payment network

Document No.: 1409738  Publication date: 2020-03-06

Reading note: This technology, "System and method for monitoring a distributed payment network," was designed and created by S. N. Proctor and D. E. Morey on 2018-03-29. Abstract: Systems and methods are provided for distributing one or more services to edge devices in a payment network and monitoring, by a network of nodes of the payment network, distributed processing devices associated with the payment network. One example method includes receiving, at a first node of a network of nodes, a request for data from a client and identifying, by the first node, a second node as including the data. The method also includes forwarding the request for data to the second node and, upon receiving the data from the second node, providing, by the first node, the data to the client in response to the request, whereby the network of nodes completes the request for data even when the data is included in the second node and the request is received at the first node.

1. A system for monitoring a distributed payment network, the system comprising a payment network comprising:

a first set of processing devices and a second set of processing devices, each coupled to an organization and configured to coordinate communications with the organization;

a first node coupled to the first set of processing devices; and

a second node coupled to the second set of processing devices and configured to cooperate with the first node to form a mesh network;

wherein the first node comprises a first data structure and is configured to:

receive, from the first set of processing devices, raw data associated with communications between the first set of processing devices and the organization; and

store the raw data received from the first set of processing devices in the first data structure included in the first node; and

wherein the second node comprises a second data structure and is configured to:

receive, from the second set of processing devices, raw data associated with communications between the second set of processing devices and the organization;

store the raw data received from the second set of processing devices in the second data structure included in the second node;

compile processed data based on the raw data stored in the first data structure and the second data structure; and

provide the processed data to a requestor.

2. The system of claim 1, wherein one processing device of the first set of processing devices is configured to send raw data to the first node based on a schedule.

3. The system of claim 1, wherein the payment network comprises a third node and a fourth node; and

wherein the first node is further configured to:

identify each of the second node and the third node; and

provide information related to the identified second and third nodes to the fourth node.

4. The system of claim 1, wherein the first node is further configured to:

receive a request for data associated with a processing device included in the second set of processing devices;

identify the second node based on the request;

forward the request to the second node; and

upon receiving the requested data from the second node, respond to the request with the requested data.

5. The system of claim 4, wherein the second node is further configured to receive the request from the first node and respond with the requested raw data.

6. The system of claim 1, wherein the payment network further comprises:

a third set of processing devices, each processing device of the third set of processing devices coupled to the organization and configured to coordinate communications with the organization; and

a third node coupled to the third set of processing devices and to the first node but not to the second node, wherein the third node comprises a third data structure and is configured to:

receive, from the third set of processing devices, raw data associated with communications between the third set of processing devices and the organization;

store the raw data in the third data structure; and

enable the second node to access the stored raw data via the first node.

7. The system of claim 1, wherein the second node comprises a transformation worker configured to compile processed data based on the raw data stored in the first data structure and the second data structure, the processed data comprising a total transaction amount per second for at least one of the processing devices included in the first set of processing devices.

8. The system of claim 1, wherein the processed data comprises a transaction total for a plurality of processing devices in the payment network.

9. The system of claim 8, wherein the second node comprises an application programming interface configured to send the processed data to an application in response to a request from the application.

10. The system of claim 1, wherein the second node is further configured to cooperate with the first node and a plurality of other nodes to form a mesh network; and

wherein the mesh network comprises a hierarchical mesh network.

11. The system of claim 1, wherein the second node is further configured to:

receive an operation message from one processing device of the second set of processing devices; and

send the operation message to an event management service.

12. The system of claim 1, further comprising a unified logical data structure comprising the first data structure and the second data structure.

13. The system of claim 1, wherein the first node is further configured to send an advisory message to at least one processing device of the first set of processing devices prior to shutdown, the advisory message identifying the second node, whereby the at least one processing device is capable of providing the raw data to the second node when the first node is shut down.

14. A computer-implemented method for monitoring distributed processing devices, the method comprising:

identifying, by a first node, each other node in a distributed network of nodes;

receiving and storing, by the first node, raw data from a plurality of processing devices;

detecting, by the first node, a shutdown condition;

identifying, by the first node, one of a plurality of available replacement nodes for each of the processing devices; and

sending, by the first node, an advisory message to each of the processing devices, wherein each message includes the identified replacement node for that processing device.

15. The computer-implemented method of claim 14, wherein identifying, by the first node, comprises identifying a replacement node for each of the processing devices based on load balancing among the plurality of available replacement nodes.

16. A computer-implemented method for monitoring distributed processing devices by a network of nodes, the network of nodes including a first node and a second node, the method comprising:

receiving, at the first node, a request for data from a client;

identifying, by the first node, the second node as including the data;

forwarding the request for data to the second node; and

providing, by the first node, the data to the client in response to the request after receiving the data from the second node, such that the network of nodes completes the request for the data even when the data is included in the second node and the request is received at the first node.

17. The computer-implemented method of claim 16, wherein the network of nodes is a mesh network of nodes.

18. The computer-implemented method of claim 17, further comprising: processing, by the first node, the data received from the second node before the data is provided to the client.

19. The computer-implemented method of claim 18, wherein processing the data comprises generating at least one summary of the data.

20. The computer-implemented method of claim 17, wherein the network of nodes comprises a third node; and

wherein the method further comprises:

transmitting, by the first node, data received from the second node to the third node, whereby a transformation worker in the third node processes the data; and

receiving, by the first node, the processed data from the third node; and

wherein providing the data to the client comprises providing the processed data to the client.

Technical Field

The present disclosure relates generally to systems and methods for monitoring a distributed payment network, and more particularly to systems and methods for monitoring transaction statistics within a payment network through a distributed network of nodes.

Background

This section provides background information related to the present disclosure, but not necessarily prior art.

Payment accounts are often used to fund transactions to purchase products (e.g., goods and/or services) from merchants and the like. To facilitate such transactions, the payment network supports communication between an acquiring bank associated with the merchant and an issuing bank associated with the particular payment account funding the transaction to provide authorization, clearing, and settlement of the transaction. The payment network typically includes interface processors (e.g., Mastercard® Interface Processors (MIPs), etc.) that are often deployed with the acquiring and issuing banks to receive and/or provide transaction-related messages (e.g., authorization requests, authorization replies, etc.) to the acquiring and issuing banks.

In conjunction with the above transactions and interactions, it is also known for payment networks to provide a centralized server to monitor and/or analyze various interface processors and/or other edge devices within the payment network to facilitate communications between the acquiring and issuing banks.

Drawings

The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.

FIG. 1 is a block diagram of an exemplary system of the present disclosure that is suitable for monitoring interface processors included in a payment network through a distributed network of nodes;

FIG. 2 is a block diagram of a computing device that may be used in the exemplary system of FIG. 1; and

FIG. 3 is an exemplary method, which may be implemented in conjunction with the system of FIG. 1, for monitoring and/or collecting data related to distributed interface processors forming part of a payment network.

Corresponding reference characters indicate corresponding parts throughout the several views of the drawings.

Detailed Description

Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The description and specific examples included herein are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.

The payment network includes interface processors that are geographically distributed and often deployed with banking institutions (also typically geographically distributed) associated with the payment network. Interface processors sometimes encounter errors or problems that cause them to fail and/or continue to operate in a degraded manner. By collecting and analyzing transaction statistics associated with the interface processors or with other edge devices of the payment network, the payment network can mitigate (e.g., eliminate, reduce, etc.) such problems or errors associated with the interface processors and/or other edge devices. Uniquely, the systems and methods herein provide a distributed network of nodes that form a mesh network over a payment network (or a segment thereof). Each node communicates with one or more processing devices, which are often deployed at customer sites (e.g., at an issuer, at an acquirer, etc.), to collect data from the processing devices. Data collection may be scheduled, or may be on-demand (e.g., via an Application Programming Interface (API), etc.). In connection with this, each node may rely on a shutdown process to transfer its connected processing devices to other nodes, and may also provide data transformations for outputting summary and/or processed data to a desired user and/or application. As such, the network of nodes provides a distributed monitoring system for the processing devices included in the payment network, which is generally scalable to accommodate changes in the payment network.

Fig. 1 illustrates an exemplary system 100 in which one or more aspects of the present disclosure may be implemented. While the system 100 is presented in one arrangement, other embodiments may include portions (or other portions) of the system 100 arranged in other ways, depending on, for example, the distribution of processing devices, the distribution of nodes, services associated with payment accounts and payment account transactions, and so forth.

As shown in fig. 1, the system 100 generally includes a payment network 102, three acquirers 104a-c (e.g., acquiring banks, etc.), and two issuers 106a-b (e.g., issuing banks, etc.), each coupled to (and in communication with) the payment network 102. Generally, the acquirers 104a-c and issuers 106a-b are banking institutions that provide accounts to consumers, merchants, and the like. The accounts may include, for example, credit accounts, savings accounts, prepaid accounts, debit accounts, checking accounts, etc. The accounts may then be used to transfer funds therebetween and/or to fund transactions to purchase products from merchants, as described in more detail below.

In this exemplary embodiment, the payment network 102 includes a plurality of processing devices 108a-e. Here, in particular, the processing devices 108a-e include, for example, Mastercard® Interface Processors (MIPs), etc. Each of the processing devices 108a-e is deployed at an "edge" of the payment network 102 (e.g., and thus each is also referred to as an edge device, etc.) because each processing device is connected to and/or provides communication with a different one of the acquirers 104a-c and/or the issuers 106a-b. In particular, each of the processing devices 108a-e is associated with one of the acquirers 104a-c or issuers 106a-b, such that communications with particular ones of the acquirers 104a-c or issuers 106a-b (e.g., communications related to authorization of payment transactions) are coordinated therethrough. As indicated in FIG. 1, in the illustrated embodiment, the acquirers 104a-c and the issuers 106a-b are geographically distributed over one or more regions and/or countries, etc. As such, the associated processing devices 108a-e are also geographically distributed. Thus, even as part of the payment network 102, the processing devices 108a-e may be deployed across different regions or countries or continents. In this manner, operations at the processing devices 108a-e may occur within one region or country and remain within that country or region (at least with respect to the specific data processed thereby) to comply with certain restrictions on data that may ultimately be transmitted and/or provided to other portions of the payment network 102 (or other portions of the system 100) in one or more different regions and/or countries.

For example, in one exemplary purchase transaction, a consumer initiates a transaction with a merchant (not shown) (in area A (not shown)) by presenting a corresponding payment device to the merchant, which transaction will be funded by a payment account issued by the issuer 106b (in area B (not shown)). In turn, the merchant reads the payment device and provides an authorization request (broadly, a message) to the acquirer 104a (i.e., the provider of the merchant's bank account), which is also located in area A. The acquirer 104a sends the authorization request to the payment network 102 via the interface processing device 108a. In response, the issuer 106b receives the authorization request from the payment network 102 via the interface processing device 108e and determines whether the transaction is approved or denied based on various factors, including, for example, the balance of the payment account, etc. When approved, the issuer 106b provides an authorization reply (broadly, a message) indicating approval back to the acquirer 104a and the merchant through the interface processing device 108e and the interface processing device 108a. The merchant can then proceed with the transaction with the consumer. Alternatively, when the transaction is denied, the issuer 106b provides an authorization reply over the payment network 102 back to the acquirer 104a and the merchant indicating that the transaction is denied. The merchant may then terminate the transaction or seek other forms of funding for the transaction.
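
The example flow above can be sketched as follows. This is purely an illustrative sketch (the function and hop names are hypothetical, not taken from the patent); it only models the hop sequence of the authorization request and reply.

```python
def authorization_path(acquirer_device: str, issuer_device: str):
    """Hop sequence for the example authorization exchange: the request
    travels merchant -> acquirer -> acquirer's interface processor ->
    payment network -> issuer's interface processor -> issuer, and the
    authorization reply retraces the same path in reverse."""
    request_path = ["merchant", "acquirer", acquirer_device,
                    "payment network", issuer_device, "issuer"]
    return request_path, list(reversed(request_path))

# In the example, the acquirer 104a uses device 108a and the issuer 106b
# uses device 108e.
request, reply = authorization_path("108a", "108e")
assert request == ["merchant", "acquirer", "108a",
                   "payment network", "108e", "issuer"]
assert reply == request[::-1]
```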

The above is described with reference to an inter-area transaction between area A and area B. And as can be appreciated, authorizing the transaction involves the exchange of data between the two areas (areas A and B). In connection with this, in some embodiments, the payment network 102 may be subject to regulatory constraints that require that transactions (as originating from area B) be processed, for example, entirely within area B, or that limit the locations where certain data related to the transaction may be stored. As such, the system 100 generally includes the ability to partition transaction data and/or services (e.g., authorization, etc.) related to a transaction at the processing devices 108a, 108e as needed (i.e., to limit its replication between sites) to meet given regulatory constraints. For example, one or more controls on how data is allowed to be queried and/or exchanged from the system 100 may be imposed at a particular processing device and/or node, etc., that receives a request related to a transaction.

In addition to the inter-area transaction described above, it should be appreciated that the present disclosure is not limited to such transactions, or, in this regard, to inter-area data limitations. Specifically, in another exemplary transaction, for example, both the acquirer 104a and the issuer 106a may be deployed in the same area (or deployed such that there are no inter-area restrictions) (e.g., both in area A, etc.). In this example, the authorization message may be sent by the payment network 102, for example, via the interface processing device 108c, to the issuer 106a associated with the consumer's account. The response then flows consistent with the flow described above.

Although only three acquirers 104a-c, two issuers 106a-b, and five processing devices 108a-e are shown in fig. 1 (for ease of illustration), it should be appreciated that the system 100 may include any desired number of such entities or devices within the scope of the present disclosure. In general, the system 100 will often include more of each of these entities and/or devices.

Fig. 2 illustrates an exemplary computing device 200 that may be used in the system 100. The computing device 200 may include, for example, one or more servers, workstations, computers, laptops, point-of-sale (POS) devices, and the like. Further, the computing device 200 may comprise a single computing device, or it may comprise multiple computing devices located close together or distributed over a geographic region, so long as the computing devices are configured to operate as described herein. In the system 100, each of the acquirers 104a-c, the issuers 106a-b, and the processing devices 108a-e may comprise, or be implemented in, a computing device consistent with the computing device 200. In connection therewith, each is then coupled to, and in communication with, one or more networks interconnecting these portions of the system 100. However, as described below, the system 100 should not be considered limited to the computing device 200, as different computing devices and/or arrangements of computing devices may be used. Moreover, different components and/or arrangements of components may be used in other computing devices.

Referring to fig. 2, an exemplary computing device 200 includes a processor 202 and a memory 204 coupled to (and in communication with) the processor 202. Processor 202 may include one or more processing units (e.g., in a multi-core configuration, etc.). For example, the processor 202 may include, but is not limited to, a Central Processing Unit (CPU), a microcontroller, a Reduced Instruction Set Computer (RISC) processor, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a programmable gate array (e.g., Field Programmable Gate Array (FPGA), etc.), a system on a chip (SOC), and/or any other circuit or processor capable of performing the operations described herein.

Memory 204 is one or more devices that allow data, instructions, etc. to be stored therein and retrieved therefrom, as described herein. Memory 204 may include one or more computer-readable storage media such as, but not limited to, Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM), solid state devices, flash drives, CD-ROMs, thumb drives, floppy disks, tape, hard disks, and/or any other type of volatile or non-volatile physical or tangible computer-readable medium. Memory 204 may be configured to store, but is not limited to, transaction data, operational data, statistical data, analytical data, performance data, and/or other types of data (and/or data structures) suitable for use herein. Further, in various embodiments, computer-executable instructions (i.e., software instructions) may be stored in memory 204 for execution by processor 202 to cause processor 202 to perform one or more operations described herein, such that memory 204 is a physical, tangible, and non-transitory computer-readable storage medium. Such instructions often improve the efficiency and/or performance of the processor 202 performing one or more of the various operations herein. Further, one or more load files may be stored in memory 204 that include a hardware description that, when loaded into processor 202 (or another processor), causes the structure of processor 202 to be consistent with the description herein (e.g., description of a gate array arrangement/configuration, etc.).

In addition, the illustrated computing device 200 also includes a network interface 206, the network interface 206 being coupled to (and in communication with) the processor 202 (and/or the memory 204) and configured to provide communication with and/or be coupled to one or more networks. The one or more networks may include, but are not limited to, one or more of a Local Area Network (LAN), a Wide Area Network (WAN) (e.g., the internet, etc.), a mobile network, a virtual network, and/or another suitable public and/or private network capable of supporting communication between the two or more portions shown in fig. 1, or any combination thereof. Consistently, the network interface 206 can include, but is not limited to, a wired network adapter, a wireless network adapter, a mobile network adapter, or other device capable of communicating with one or more different networks (e.g., such as the networks included in the system 100). In some exemplary embodiments, the computing device 200 includes the processor 202 and one or more network interfaces 206 incorporated into or with the processor 202.

Referring again to FIG. 1, the payment network 102 also includes three nodes 110a-c configured, by executable instructions, to monitor the processing devices 108a-e included in the payment network 102 and to operate as described herein. In this regard, the nodes 110a-c may each be considered a computing device consistent with the computing device 200. Also, as shown in FIG. 1, the nodes 110a-c (similar to the processing devices 108a-e) are generally distributed throughout the payment network 102 for deployment in one or more of the particular regions and/or countries, etc., in which one or more of the processing devices 108a-e are deployed. As such, communication between the processing devices 108a-e and the nodes 110a-c may be allowed (e.g., under regional data restrictions, etc.) and/or may be more efficient. That said, although the processing devices 108a-e may be distributed in different regions and/or countries, etc., the nodes 110a-c will not necessarily be distributed to all of those same regions and/or countries, etc. Further, in general, the payment network 102 will include multiple processing devices 108a-e for each of the nodes 110a-c. Additionally, although only three nodes 110a-c are illustrated in fig. 1 (for ease of illustration), it should be appreciated that the system 100 may include any desired number of nodes within the scope of the present disclosure. Generally, the system 100 will often include more such nodes (e.g., including a fourth node, a fifth node, a sixth node, etc.).

In the illustrated embodiment, node 110b includes a monitoring server 112, a transformation worker 114, an API 116, and a data structure 118. It should be understood that each of the other nodes 110a, 110c may also include the same components (although such similarity is not required in all embodiments).

In the system 100, each of the nodes 110a-c is configured to initially detect the other nodes 110a-c within the payment network 102. In particular, for example, the monitoring server 112 of node 110b is configured to determine the topology and/or arrangement of the other nodes 110a, 110c by querying or otherwise polling the other nodes 110a, 110c for data stored in the respective nodes, by reading a list of the nodes 110a, 110c from a data structure associated therewith, and/or by listening for inbound connections from the nodes 110a, 110c. Based thereon, node 110a, for example, is configured to connect to each of the other nodes 110b-c and exchange information about other known ones of the nodes 110b-c in the payment network 102. In this manner, the monitoring server 112 of node 110b is configured to be included in a mesh network defined by the nodes 110a-c, where each of the nodes 110a-c communicates with each of the other nodes 110a-c. It should be appreciated that in some embodiments, the mesh network may comprise a hierarchical topology, for example, depending on the number and/or distribution of the nodes contained therein. Additionally, in at least one embodiment, for example, node 110a is configured as a "parent" node to node 110b, whereby communications from node 110b to one or more other nodes (e.g., node 110c, etc.) are coordinated through node 110a (as node 110b is only connected to node 110a).

By way of example, the executable instructions may be defined by the code segment provided below that, when executed by node 110b (and the other nodes 110a, 110c), causes the nodes 110a-c to form a mesh network. In general, with reference to the following code segment, the term "peer" is used with reference to the other nodes 110a, 110c connected to node 110b, the term "processor" is used with reference to a processing device (e.g., a MIP, etc.), and the term "client" is used with reference to another user or application that is requesting data from node 110b regarding one or more of the processing devices.

(Code listing omitted; it is reproduced in the source only as images.)
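
Because the source reproduces the patent's listing only as images, its text is unavailable. As a loose, hypothetical sketch of the peer-discovery and peer-list-exchange behavior described above (none of these names come from the patent), the mesh-formation step might look like:

```python
class Node:
    """Illustrative mesh node; the class and attribute names are hypothetical."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.peers = {}  # node_id -> Node; peers known to this node

    def connect(self, other):
        """Connect to a peer, then exchange peer lists so both sides learn
        of every other known node (gossip-style discovery)."""
        if other.node_id == self.node_id or other.node_id in self.peers:
            return  # never peer with self or repeat an existing connection
        self.peers[other.node_id] = other
        other.peers[self.node_id] = self
        # Introduce each side's known peers to the other side.
        for peer in list(self.peers.values()):
            other.connect(peer)
        for peer in list(other.peers.values()):
            self.connect(peer)

# Usage: a single bootstrap connection per node yields a full mesh,
# mirroring nodes 110a-c each communicating with each other node.
a, b, c = Node("110a"), Node("110b"), Node("110c")
a.connect(b)
b.connect(c)
assert set(a.peers) == {"110b", "110c"}
```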

The monitoring server 112 of node 110b is also configured to listen for inbound network connections from the processing devices 108a-e and to maintain a list of the processing devices 108a-e in the payment network 102. In particular, for example, the monitoring server 112 is configured to monitor connection/disconnection activity of the processing devices 108a-e and to exchange advisory messages with the other nodes 110a, 110c to indicate to the other nodes the locations of all of the processing devices 108a-e in the payment network 102.

Further, the monitoring server 112 of node 110b is configured to communicate (bi-directionally) with one or more of the processing devices 108a-e in two primary exchanges: scheduled exchanges and on-demand exchanges. In the first, scheduled exchange, the processing devices 108a-e are configured to provide data, such as, for example, transaction totals by type (e.g., credit transactions, debit transactions, etc.), to the monitoring server 112 according to a scheduled time or a defined interval (e.g., every two minutes, five minutes, one hour, etc.). The scheduled time may be every two minutes, every hour, every day, at midnight (or some other time), on even/odd days, weekly, etc. In response, the monitoring server 112 is configured to receive the data, process the data (as needed or desired), and store the data to the data structure 118 included in the node 110b. It should be appreciated that in other embodiments, a variety of different types of data, established intervals, and/or schedules may be employed to capture data from the processing devices 108a-e, possibly depending on, for example, the particular capabilities of the processing devices 108a-e to be measured and/or reviewed.
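
A minimal sketch of the node-side handling of such scheduled exchanges follows. The class, method, and field names here are hypothetical (the patent's own code is not reproduced as text in this source); the sketch only illustrates storing periodic raw-data reports and aggregating them into transaction totals.

```python
from collections import defaultdict


class MonitoringStore:
    """Hypothetical node-side store for scheduled raw-data pushes,
    loosely analogous to the data structure 118 in node 110b."""

    def __init__(self):
        # device_id -> list of (tick_seconds, stats) scheduled reports
        self.raw = defaultdict(list)

    def record(self, device_id, tick, stats):
        """Store one scheduled report, e.g. transaction counts by type."""
        self.raw[device_id].append((tick, stats))

    def totals(self, device_id):
        """Aggregate the stored reports into overall transaction totals."""
        out = defaultdict(int)
        for _, stats in self.raw[device_id]:
            for txn_type, count in stats.items():
                out[txn_type] += count
        return dict(out)


# Usage: a processing device reporting every two minutes might push:
store = MonitoringStore()
store.record("108a", 0, {"credit": 120, "debit": 80})
store.record("108a", 120, {"credit": 95, "debit": 60})
assert store.totals("108a") == {"credit": 215, "debit": 140}
```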

In conjunction with on-demand exchanges, the monitoring server 112 of node 110b is configured to respond to one or more different data demands. The demands may be imposed by one or more of the processing devices 108a-e, or may be received from an entity external to the payment network 102 (e.g., a user or program associated with the payment network 102 or the issuer 106a, etc.). In particular, when a problem is occurring at one of the processing devices 108a-e and/or an abnormal condition is otherwise identified, the processing device may be configured to send an operation message (e.g., a high-priority operation message, etc.) to the node 110b. In response, the monitoring server 112 is configured to receive the operation message, store the operation message in the data structure 118, and provide the operation message to an event management service (e.g., internal or external to the payment network 102, etc.). Also, when the data demand originates from a user or program and involves a specific data request regarding one or more of the processing devices 108a-e, the monitoring server 112 is configured to identify the particular one of the nodes 110a-c (and its associated monitoring server) that is connected to the one or more processing devices 108a-e in question, based on the topology of the mesh network as understood by the monitoring server 112. Once identified, the monitoring server 112 is configured to provide the request to the appropriate one of the nodes 110a-c (and its monitoring server), which is then configured to request the appropriate data from the one or more of the processing devices 108a-e. In turn, when a response is received from the one or more of the processing devices 108a-e, the receiving one of the nodes 110a-c (and its corresponding monitoring server) is configured to relay the data in the opposite direction, e.g., to the monitoring server 112 of node 110b as needed, and ultimately back to the user or program that originally requested the data.
In this manner, an on-demand request for data may be provided by the requesting user or program to any known monitoring server in the payment network 102 (at any of the nodes 110a-c) and still be effective to retrieve the required data. Moreover, neither the user or program requesting the data, nor the processing device 108a-e to which the request is directed, needs to understand the topology of the nodes 110a-c.

By way of example, the executable instructions may be defined by the code segment provided below that, when executed by node 110b (and/or the other nodes 110a, 110c), causes node 110b to communicate one or more client requests (e.g., requests from users or applications, etc.), combined with data demands, to appropriate ones of the other nodes 110a, 110c.

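While the code segments themselves appear only as figures in the original publication, a minimal sketch of the routing behavior they describe might look like the following. All names here (e.g., `MeshNode`, `route_request`) are hypothetical illustrations, not the patented implementation:

```python
# Hypothetical sketch: a monitoring node forwards client requests across
# the mesh based on its understanding of the topology, so a client may
# contact any node and still retrieve data held at another node.

class MeshNode:
    def __init__(self, name, connected_devices, data_store):
        self.name = name
        self.connected_devices = set(connected_devices)  # e.g. {"108c"}
        self.data_store = data_store                     # device id -> raw data
        self.peers = []                                  # other MeshNode objects

    def route_request(self, device_id):
        """Return data for device_id, forwarding to a peer when needed."""
        if device_id in self.connected_devices:
            return self.data_store[device_id]
        # Consult the mesh topology: find the peer connected to the device.
        for peer in self.peers:
            if device_id in peer.connected_devices:
                return peer.route_request(device_id)  # relayed back through self
        raise LookupError(f"no node in the mesh serves {device_id}")
```

As in the passage above, the requester receives the same data regardless of which node it happens to contact first.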

With continued reference to fig. 1, the nodes 110a-c of the payment network 102 are further configured to provide an alternate connection for the processing devices 108a-e when a particular one of the nodes 110a-c associated with a particular one of the processing devices 108a-e is down. In doing so, the one of the nodes 110a-c that is shutting down notifies its connected ones of the devices 108a-e of the shutdown and provides address information for the monitoring server of another of the nodes 110a-c. In particular, for example, when the monitoring server 112 of node 110b is powering down, it is configured to send an advisory message to the processing device 108c connected thereto (so that the processing device 108c may then be configured to connect with one of the other nodes 110a, 110c). Similarly, in this example, when node 110a is powering down, the monitoring server of node 110a is configured to send advisory messages to processing devices 108a-b (such that processing devices 108a-b are then configured to connect to node 110c and node 110b, respectively, as indicated by the dotted lines in fig. 1). And, when node 110c is shutting down, the monitoring server of node 110c is configured to send advisory messages to processing devices 108d-e (such that processing devices 108d-e may then be configured to connect to one of the other nodes 110a-b). In this manner, the various processing devices 108a-e are provided with the necessary information to quickly reconnect to another one of the nodes 110a-c, even if that node is not directly known to the processing devices 108a-e. Further, the monitoring server of a respective node 110a-c may be configured to utilize heuristics to decide which of the other nodes 110a-c to suggest to each of its connected processing devices 108a-e prior to shutdown.
As such, the monitoring servers may be configured to attempt to distribute, evenly or otherwise, the current load of their corresponding ones of the processing devices 108a-e among a plurality of the other nodes 110a-c (within the same area, or not), or they may be configured to use the topology of the mesh network of nodes 110a-c to select, for each of their given processing devices 108a-e, another one of the nodes 110a-c that is nearby, available, or efficient.
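The even-distribution heuristic described above might be sketched as follows. The function name and data shapes are illustrative assumptions, not part of the disclosed system:

```python
# Hypothetical sketch: before shutdown, a node picks a replacement for each
# of its connected processing devices, spreading them across the peer nodes
# with the lightest current load (a simple even-distribution heuristic).

def pick_replacements(devices, peer_loads):
    """Map each device to the peer node with the fewest assigned devices.

    devices    -- ids of processing devices connected to the closing node
    peer_loads -- dict of peer node id -> current connected-device count
    """
    loads = dict(peer_loads)  # do not mutate the caller's view of the mesh
    assignments = {}
    for device in devices:
        target = min(loads, key=loads.get)   # least-loaded peer wins
        assignments[device] = target
        loads[target] += 1                   # account for the new connection
    return assignments
```

A topology-aware variant could instead rank peers by proximity or path latency rather than connection count.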

In the exemplary embodiment, the data structure 118 is shown as being included in node 110b. It should be understood that the data structure 118 is configured to cooperate with data structures included in the other nodes (e.g., node 110a and node 110c, etc.) to provide distributed storage within the payment network 102 as desired and/or appropriate. In particular, for example, the data structures of the various nodes 110a-c, as operated and/or used by the monitoring servers of the nodes 110a-c, may form an Apache Cassandra™ data structure, thereby providing a unified logical storage (or data structure) as used herein. Alternatively, in other embodiments, the data structures of the nodes 110a-c may form other arrangements (e.g., MongoDB, Riak, OrientDB, etc.) and/or may include distributed NoSQL key-value data structures.

Additionally, in the system 100, the transformation workers of the nodes 110a-c (e.g., transformation worker 114 of node 110b, etc.) are configured to aggregate, transform, and/or otherwise process raw data received from respective ones of the processing devices 108a-e and store the data in the corresponding data structures (e.g., data structure 118 of node 110b, etc.). The transformation workers are further configured to perform routine maintenance of the data structures. For example, a transformation worker may be configured to aggregate data for multiple processing devices 108a-e (across multiple nodes 110a-c) and calculate the instantaneous total transactions per second (TPS) being processed on the payment network 102. Moreover, in another example, the transformation worker may be configured to utilize current and historical data to generate future predictions of activity for the various processing devices 108a-e, which may be used to plan capacity goals and/or to identify deviations from specifications indicative of problems in the payment network 102 (or in a particular one of the processing devices 108a-e). In yet another example, the transformation worker may be configured to generate predictions about all of the processing devices 108a-e or about a group of the processing devices 108a-e, and/or to prune data from its corresponding data structure once a retention interval (e.g., one day, one week, etc.) defined for that data has passed.
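Two of the transformation-worker chores above, TPS aggregation and retention pruning, lend themselves to a short sketch. This is a minimal illustration under assumed data shapes, not the disclosed implementation:

```python
# Hypothetical sketch of two transformation-worker chores: aggregating
# per-device transaction counts into a network-wide instantaneous TPS,
# and pruning records older than a configured retention interval.

import time

def network_tps(per_device_counts, window_seconds):
    """Instantaneous TPS: transactions seen in the window, across devices."""
    return sum(per_device_counts.values()) / window_seconds

def prune(records, retention_seconds, now=None):
    """Drop (timestamp, payload) records older than the retention interval."""
    now = time.time() if now is None else now
    return [(ts, payload) for ts, payload in records
            if now - ts <= retention_seconds]
```

In the distributed setting described above, `per_device_counts` would be gathered from the data structures of all nodes 110a-c before aggregation.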

Further in the system 100, the monitoring servers of the nodes 110a-c are configured to initiate and/or manage the transformation workers and to communicate with the other monitoring servers to ensure that, at least in this embodiment, only one of the transformation workers is active (in the payment network 102, or a subset thereof) at a time. In this manner, only one transformation worker processes the data contained in the various data structures (across all nodes 110a-c), to reduce and/or eliminate multiple-access problems (e.g., data collisions, etc.) and/or conflicting processed data output from the transformation workers. In this regard, a given transformation worker may be active, rather than the other transformation workers, because it is included, for example, at the one of the nodes 110a-c having the fewest connected processing devices 108a-e, at the one of the nodes 110a-c having the lowest-latency path to the data structure, at a randomly selected one of the nodes 110a-c, at the one of the nodes 110a-c with the longest uptime, and so forth.
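A deterministic election over the criteria just listed might be sketched as below. The ranking order (fewest connected devices, ties broken by longest uptime) is one assumed combination of the criteria named above, and all identifiers are hypothetical:

```python
# Hypothetical sketch: every monitoring server ranks the candidate nodes
# by the same criteria, so all nodes independently agree on which single
# transformation worker should be active at a time.

def elect_active_worker(nodes):
    """nodes: list of dicts with 'name', 'device_count', and 'uptime' keys.

    Returns the name of the node whose transformation worker should run:
    fewest connected processing devices first, longest uptime breaks ties.
    """
    ranked = sorted(nodes, key=lambda n: (n["device_count"], -n["uptime"]))
    return ranked[0]["name"]
```

Because the ranking is deterministic over shared state, no extra coordination round is needed so long as every monitoring server sees the same view of the mesh.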

It should be appreciated that in at least one embodiment, the nodes 110a-c may include a total of one transformation worker, whereby selection of an active transformation worker is eliminated and all requests of the type described above are routed to the node having the transformation worker.

Also, in the exemplary embodiment, the APIs of nodes 110a-c (e.g., API 116 at node 110b, etc.) are configured to provide users and/or user programs or applications (broadly, clients) with access to raw and/or aggregated (or processed) data. Each API may include, but is not limited to, a Web services API such as, for example, representational state transfer (REST) or Simple Object Access Protocol (SOAP), etc., or a data structure access mechanism (e.g., Java database connectivity (JDBC), open database connectivity (ODBC), etc.) or scheduled file transfers, etc. In one example user application, a network-based dashboard tool accessible to a user at a computing device may be configured to query the entirety of the payment network 102 for summary data every 30 seconds using a REST API. However, it should be appreciated that other applications or tools may be employed to invoke the API to retrieve data from the nodes 110a-c, as desired.
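The dashboard pattern described above, polling a REST endpoint for summary data on a fixed interval, might be sketched as follows. The endpoint path, field names, and the injected `fetch` callable are all illustrative assumptions, not a documented API of the system:

```python
# Hypothetical sketch of a dashboard client polling a node's REST API
# for network-wide summary data every 30 seconds. The transport is
# injected as a callable so the polling logic stays testable without
# a live payment network.

import json

SUMMARY_PATH = "/api/v1/summary"   # assumed REST resource, for illustration
POLL_SECONDS = 30                  # interval used in the example above

def poll_summary(fetch):
    """fetch: callable taking a path and returning a JSON response body."""
    body = json.loads(fetch(SUMMARY_PATH))
    # Keep only the fields the dashboard displays.
    return {"tps": body["tps"], "active_devices": body["active_devices"]}
```

In production the `fetch` callable would issue an HTTP GET against whichever node the dashboard happens to know, consistent with the topology transparency discussed earlier.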

Fig. 3 illustrates an exemplary method 300 for monitoring a distributed payment network. Method 300 is described with reference to system 100, and in particular processing devices 108a-e and nodes 110a-c, and also with reference to computing device 200. While described in this manner, it should be appreciated that the methods herein are not limited to the system 100 and/or the computing device 200. Likewise, it should also be appreciated that the systems and methods herein should not be construed as limited to the method 300.

Consistent with the above, each of the nodes 110a-c of the payment network 102 in the system 100 continuously receives and stores data in its corresponding data structure (e.g., in the data structure 118 for node 110b, etc.). From time to time, requests may be provided to one or more of the nodes 110a-c by a user and/or application included in or associated with the payment network 102. In particular, in the example method 300 shown in fig. 3, a request is received by node 110b from an application at 302. When the request is provided from within the payment network 102 (i.e., when received from an application within the payment network 102), the request may be received via the monitoring server 112 or through the API 116. By way of example, an application within the payment network 102 may monitor the status of the processing devices 108a-e and request data from one or more of the nodes 110a-c. Alternatively, node 110b, or the nodes 110a-c collectively, may use data received unsolicited by the monitoring server(s) therein, which is then accumulated and transformed to provide global statistics, a health dashboard, and/or other representations of the transformed data in one or more user-interactive manners, thereby providing monitoring of the payment network 102, and in particular of the processing devices 108a-e, at a given time.

In response to the request in method 300, node 110b determines at 304 whether the request relates to data specific to a processing device (e.g., processing device 108c, etc.), or whether the data is generic to multiple ones of processing devices 108a-e (and specifically to processing devices 108a-e associated with different nodes 110a-c), and so forth. When the data request is specific to a particular processing device (e.g., processing device 108c, etc.), node 110b identifies the node(s) of nodes 110a-c associated with the given processing device to which the request is directed at 306. In this example, if the processing device is processing device 108c, then processing device 108c is associated with node 110b (i.e., the current or local node), and thus, node 110b identifies itself. When the local node 110b identifies itself, the node 110b compiles the data or retrieves raw data from the data structure 118 as needed and sends the requested data to the requesting user or application at 308.

Conversely, when node 110b identifies at 306 that the processing device to which the request is directed is associated with a different one of nodes 110a or 110c (e.g., processing device 108a is associated with node 110a when the request is directed to processing device 108a, etc.), node 110b provides the request to the identified one of the other nodes 110a or 110c (e.g., node 110a for processing device 108a, etc.) at 310. In turn, the other of nodes 110a or 110c (e.g., node 110a, etc.) compiles the requested data or retrieves the raw data from its corresponding data structure as needed and returns the requested data to node 110b. Node 110b then sends the requested data, at 312, to the user or application that initiated the request.

With continued reference to FIG. 3, when node 110b determines at 304 that the request relates to data common to the plurality of processing devices 108a-e, node 110b invokes, at 314, the transformation worker 114 of node 110b, which in turn compiles the data as desired. For example, the transformation worker 114 may retrieve data from the other nodes 110a, 110c and aggregate it (together with the data in node 110b), and then calculate a total number of transactions per second processed on the payment network 102 for each processing device (or group of processing devices), a transaction speed for each processing device (or group of processing devices), a total transaction amount for each region, and the like. Additionally or alternatively, the transformation worker 114 may determine predictions of activity of individual ones of the processing devices 108a-e, which may be used for capacity-planning purposes and/or for identification of deviations in performance of the processing devices from certain thresholds and/or specifications (e.g., as an indication of a potential or significant problem, etc.), and so forth.

Then, once the data is compiled by the transformation worker 114 at 314, the node 110b sends the requested data to the user or application that initiated the request at 316.

It should be appreciated that a particular one of the nodes 110a-c that receives a given request (even a generic request) may still provide the request to another one of the nodes 110a-c for handling. In particular, for example, the monitoring servers included in the several nodes 110a-c may identify a particular node based on, for example, the number of processing devices 108a-e connected to the particular node, the lowest-latency path to the requested data, the particular node being a random one of the nodes 110a-c, the uptime of the nodes 110a-c (e.g., selecting the longest-running one of the nodes, etc.), and so forth. As an example, as shown in fig. 1, node 110b has only one connected processing device 108c, and as such, even when a data request directed to another processing device (e.g., processing device 108e) is received at node 110b, the request may still be provided to node 110c (consistent with 310). Thereafter, node 110c may compile the data according to the request and provide the data back to node 110b, which in turn sends it to the user or application that initiated the request.

Additionally (or alternatively) in the method 300, node 110b may provide an alternative connection for its associated processing device 108c as needed, such as when node 110b is going down (e.g., shutting down, etc.). In particular, node 110b may initially detect its down condition at 318 (e.g., as a planned shutdown, as an unplanned shutdown, etc.). In turn, node 110b notifies device 108c of the down condition at 320 and, at 322, identifies another one of the nodes 110a, 110c in the system 100 (e.g., the monitoring server of the other one of the nodes 110a, 110c) to which device 108c may connect while node 110b is down. In doing so, for example, node 110b may identify the other of the nodes 110a, 110c based on load balancing between the nodes 110a, 110c, etc. Then, when the desired one of the other nodes 110a, 110c is identified, node 110b sends an advisory message to the processing device 108c at 324, providing address information for the monitoring server of the identified other one of the nodes 110a, 110c. As such, the processing device 108c is able to quickly and efficiently reconnect to the identified one of the other nodes 110a, 110c, even if that node was not directly known to the processing device 108c prior to receiving the advisory message.
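Steps 318-324 above can be sketched as a small message-building routine. The message fields and function signature are hypothetical illustrations of the advisory flow, not the disclosed wire format:

```python
# Hypothetical sketch of steps 318-324: on detecting a down condition,
# the closing node builds one advisory message per connected processing
# device, naming a replacement node and its monitoring-server address.

def shutdown_advisories(node_name, connected, replacements, addresses):
    """Build advisory messages for every device connected to a closing node.

    connected    -- device ids connected to the closing node
    replacements -- device id -> replacement node id (e.g. from load balancing)
    addresses    -- node id -> monitoring-server address
    """
    messages = []
    for device in connected:
        target = replacements[device]
        messages.append({
            "to": device,
            "event": "node_down",         # step 320: notify of the condition
            "from_node": node_name,
            "reconnect_to": target,       # step 322: the identified replacement
            "address": addresses[target], # step 324: address information
        })
    return messages
```

Each device thus receives everything it needs to reconnect, even to a node it had no prior knowledge of.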

In view of the above, the systems and methods herein may provide for distributed data gathering in relation to a payment network, and in particular in relation to processing devices included in and/or associated with the payment network. In connection therewith, by relying on nodes arranged in a mesh network and/or utilizing distribution of data between the nodes, the distributed data gathering may be transparent to the users and/or applications from which data is requested, and may also be extended to accommodate changes in the payment network (and/or the processing devices included therein). Further, the systems and methods herein provide a shutdown process whereby the distributed data gathering is generally made available to users and/or applications at all times. Moreover, the data transformation worker of a node herein is configured to generate an operationally relevant summary of processed data associated with a client request in the system, and to: aggregate data from different processing devices to calculate an instantaneous total transactions per second (TPS) processed over the payment network; utilize current and historical data to generate future predictions of activity of individual processing devices, which can be used for capacity-planning purposes and to quickly identify deviations from specifications that may indicate problems on the payment network; generate predictions, as described above, for a group of related processing devices; and/or prune data from the data structure associated with the node once the expected retention period for the data has elapsed.

Again, and as previously mentioned, it should be appreciated that in some embodiments, the functions described herein may be described in terms of computer-executable instructions stored on a computer-readable medium and executable by one or more processors. The computer-readable medium is a non-transitory computer-readable storage medium. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processor. Combinations of the above should also be included within the scope of computer-readable media.

It should also be appreciated that one or more aspects of the present disclosure transform a general-purpose computing device into a special-purpose computing device when configured to perform the functions, methods, and/or processes described herein.

As will be appreciated based on the foregoing specification, the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof, wherein a technical effect may be achieved by performing one or more of the following: (a) identifying, by the first node, each other node in the distributed network of nodes; (b) receiving and storing, by the first node, raw data from a plurality of processing devices; (c) detecting a shutdown condition; (d) identifying, by the first node, one of a plurality of available replacement nodes for each of the processing devices; and (e) sending, by the first node, an advisory message to each of the processing devices, wherein each message includes the identified replacement node for the processing device.

As will also be appreciated based on the foregoing specification, the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof, wherein a technical effect may be achieved by performing one or more of the following: (a) receiving a data request from a client at a first node of a network of nodes; (b) identifying, by the first node, a second node of the network as including the data; (c) forwarding the request for the data to the second node; and (d) in response to the request, providing the data to the client after receiving the data from the second node, such that the network of nodes completes the request for the data even if the data is included in the second node and the request is received at the first node.

The exemplary embodiments are provided so that this disclosure will be thorough and will fully convey the scope to those skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known techniques have not been described in detail.

The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" may also be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," and "having," are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless specifically identified as an order of execution, the method steps, processes, and operations described herein are not to be construed as necessarily requiring their execution in the particular order discussed or illustrated. It should also be understood that additional or alternative steps may be employed.

When a feature is referred to as being "on," "engaged to," "connected to," "coupled to," "associated with," "included in," or "in communication with" another feature, it can be directly on, engaged to, connected to, coupled to, associated with, included in, or communicating with the other feature, or intervening features may be present. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Although the terms "first," "second," "third," etc. may be used herein to describe various features, these features should not be limited by these terms. These terms may be used only to distinguish one feature from another. Terms such as "first," "second," and other numerical terms used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first feature discussed herein may be termed a second feature without departing from the teachings of the example embodiments.

None of the elements recited in the claims is intended to be a means-plus-function element within the meaning of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase "means for," or, in the case of a method claim, using the phrases "operation for" or "step for."

The foregoing description of the exemplary embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
