Data processing method, network device, computing node and system

Document No. 955610, published 2020-10-30

Reading note: this technique, "Data processing method, network device, computing node and system", was designed and created by Lin Weibin, Hou Xinyu, and Li Tao on 2019-04-30. Its main content is as follows: the application provides a data processing method, a network device, a computing node, and a system. The method includes: the network device reads the pointer information of a first queue from the queue information storage space of the computing node according to the identifier of the first queue; it then obtains the data to be processed in the first queue according to the read pointer and processes that data; finally, it updates the position of the unit indicated by the read pointer according to the first write pointer. This technical solution reduces the data processing errors of the network device caused, in the conventional technology, by inconsistency between the write pointer used by the network device and the write pointer stored by the computing node.

1. A method of data processing, the method comprising:

the network device reads pointer information of a first queue from a queue information storage space of a computing node according to an identifier of the first queue, wherein the network device is connected to the computing node, the queue information storage space is disposed in a memory of the computing node, the network device is configured to implement communication between the computing node and other computing nodes based on a queue pair, the queue pair comprises a plurality of queues, the first queue is any one of the plurality of queues, the pointer information of the first queue comprises a first write pointer and a read pointer, the read pointer indicates the position of the next unit to be processed by the network device, and the first write pointer indicates the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current moment;

the network device obtains data to be processed in the first queue according to the read pointer, and processes the data to be processed; and

the network device updates the position of the unit indicated by the read pointer according to the first write pointer.

2. The method of claim 1, wherein the reading, by the network device, of the pointer information of the first queue from the queue information storage space of the computing node according to the identifier of the first queue comprises:

the network device reads the first write pointer from a first storage space in the queue information storage space of the computing node according to the identifier of the first queue, wherein the first storage space is used for storing the first write pointer.

3. The method of claim 1 or 2, wherein before the network device updates the position of the unit indicated by the read pointer according to the first write pointer, the method further comprises:

the network device reads a second write pointer from a cache of the network device according to the identifier of the first queue, wherein the second write pointer is the write pointer in the pointer information of the first queue cached by the network device, and the cache of the network device is used for storing the pointer information of the first queue that has been read by the network device;

the network device compares the first write pointer with the second write pointer; and

when the network device determines that the unit indicated by the first write pointer is a unit in which an instruction is stored after an instruction is stored in the unit indicated by the second write pointer, the network device updates the write pointer in the pointer information of the first queue cached by the network device to the first write pointer.

4. The method of claim 3, wherein the comparing, by the network device, of the first write pointer and the second write pointer comprises:

the network device compares time information of the first write pointer with time information of the second write pointer;

and wherein the updating of the write pointer in the pointer information of the first queue cached by the network device to the first write pointer, when the network device determines that the unit indicated by the first write pointer is a unit in which an instruction is stored after an instruction is stored in the unit indicated by the second write pointer, comprises:

updating, by the network device, the write pointer in the pointer information of the first queue cached by the network device to the first write pointer when it is determined that the time information of the first write pointer is earlier than the time information of the second write pointer.

5. The method of claim 4, wherein the comparing, by the network device, of the first write pointer and the second write pointer comprises:

when the work queue elements in the queue are not cyclically reused, the network device compares a first distance with a second distance, wherein the first distance is the number of units between the first write pointer and the read pointer, and the second distance is the number of units between the second write pointer and the read pointer;

and wherein the updating of the write pointer in the pointer information of the first queue cached by the network device to the first write pointer, when the network device determines that the unit indicated by the first write pointer is a unit in which an instruction is stored after an instruction is stored in the unit indicated by the second write pointer, comprises:

updating, by the network device, the write pointer in the pointer information of the first queue cached by the network device to the first write pointer when it is determined that the first distance is greater than the second distance.

6. A method of data processing, the method comprising:

the computing node determines a queue information storage space, and a first storage space in the queue information storage space, according to the number of queues to be processed and the size of queue information, wherein the computing node communicates with other computing nodes through a network device based on queue pairs, each queue pair comprises a plurality of queues, and the computing node can directly perform a write operation on the first storage space;

the computing node stores a write pointer of a first queue into the first storage space, wherein the first queue is any one of the queues, and the write pointer of the first queue indicates the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current time; and

the computing node sends an identifier of the first queue to the network device.

7. The method of claim 6, wherein the method further comprises:

the computing node determines that the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current moment has changed;

the computing node updates the position of the unit indicated by the write pointer of the first queue in the first storage space; and

the computing node sends the identifier of the first queue and a first write pointer to the network device, wherein the first write pointer indicates the position, in the updated first queue, of the last unit in which the computing node is allowed to store data at the current moment.

8. A network device, characterized in that the network device comprises:

an obtaining unit, configured to read pointer information of a first queue from a queue information storage space of a computing node according to an identifier of the first queue, wherein the network device is connected to the computing node, the queue information storage space is disposed in a memory of the computing node, the network device is configured to implement communication between the computing node and other computing nodes based on a queue pair, the queue pair comprises a plurality of queues, the first queue is any one of the plurality of queues, the pointer information of the first queue comprises a first write pointer and a read pointer, the read pointer indicates the position of the next unit to be processed by the network device, and the first write pointer indicates the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current time; and

a processing unit, configured to obtain data to be processed in the first queue according to the read pointer and process the data to be processed;

wherein the processing unit is further configured to update the position of the unit indicated by the read pointer according to the first write pointer.

9. The network device according to claim 8, wherein the obtaining unit is specifically configured to read the first write pointer from a first storage space in a queue information storage space of the computing node according to an identifier of the first queue, where the first storage space is used for storing the first write pointer.

10. The network device according to claim 8 or 9, wherein the network device further comprises a storage unit configured to store the pointer information of the first queue that has been read by the obtaining unit,

the obtaining unit is further configured to, before the processing unit updates the position of the unit indicated by the read pointer according to the first write pointer, read a second write pointer from the storage unit according to the identifier of the first queue, wherein the second write pointer is the write pointer in the pointer information of the first queue stored in the storage unit;

and the obtaining unit is further configured to compare the first write pointer with the second write pointer before the processing unit updates the position of the unit indicated by the read pointer according to the first write pointer, and, when the unit indicated by the first write pointer is determined to be a unit in which an instruction is stored after an instruction is stored in the unit indicated by the second write pointer, update the write pointer in the pointer information of the first queue stored in the storage unit to the first write pointer.

11. The network device according to claim 10, wherein the processing unit is specifically configured to compare the time information of the first write pointer with the time information of the second write pointer, and to update the write pointer in the pointer information of the first queue cached by the network device to the first write pointer when it is determined that the time information of the first write pointer is earlier than the time information of the second write pointer.

12. The network device according to claim 11, wherein the processing unit is specifically configured to compare a first distance with a second distance when the work queue elements in the queue are not cyclically reused, and to update the write pointer in the pointer information of the first queue cached by the network device to the first write pointer when it is determined that the first distance is greater than the second distance, wherein the first distance is the number of units between the first write pointer and the read pointer, and the second distance is the number of units between the second write pointer and the read pointer.

13. A computing node, wherein the computing node comprises:

a processing unit, configured to determine a queue information storage space, and a first storage space in the queue information storage space, according to the number of queues to be processed and the size of queue information, wherein the queue information storage space is disposed in a storage unit of the computing node, the computing node communicates with other computing nodes through a network device based on queue pairs, each queue pair comprises a plurality of queues, and the computing node can directly perform a write operation on the first storage space;

wherein the processing unit is further configured to store a write pointer of a first queue into the first storage space, the first queue being any one of the queues, and the write pointer of the first queue indicating the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current time; and

a sending unit, configured to send the identifier of the first queue to the network device.

14. The computing node of claim 13, wherein the processing unit is further configured to determine that the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current time has changed, and to

update the position of the unit indicated by the write pointer of the first queue in the first storage space; and

the sending unit is further configured to send, to the network device, the identifier of the first queue and a first write pointer, wherein the first write pointer indicates the position, in the updated first queue, of the last unit in which the computing node is allowed to store data at the current time.

15. A network device comprising a processor and a memory, the memory storing program code, the processor being configured to invoke the program code in the memory to perform the method of any of claims 1 to 5.

16. A computing node, characterized in that the computing node comprises a processor and a memory, the memory storing program code, the processor being configured to invoke the program code in the memory to perform the method according to claim 6 or 7.

17. A system, characterized in that the system comprises a network device according to any of claims 8 to 12 and a computing node according to claim 13 or 14.

Technical Field

The present application relates to the field of information technology, and more particularly, to a method, a network device, a computing node, and a system for data processing.

Background

A source computing node in a data center network can send data to be processed to a destination network device through a source network device, and the destination network device then writes the received data into the destination computing node. The source network device and the destination network device may be network interface cards, and they are used for communication between the source computing node and the destination computing node.

The above data transmission process is implemented on the basis of queues. Each queue includes a plurality of units. Pointers indicate the unit to be processed in the current queue and the unit most recently added to the current queue. In a conventional technical solution, a source network device first obtains, from a source computing node, the physical address of the write pointer, that is, the address of the storage space in which the write pointer is stored. The source network device then obtains the write pointer from the source computing node according to that physical address. The write pointer indicates the unit most recently added to the current queue, so the source network device can determine the last unit of the current queue from it. However, pointer information is not synchronized in real time between the source computing node and the source network device: after the source computing node notifies the source network device of the write pointer, it may continue to write new units into the current queue. In this case, the unit indicated by the write pointer obtained by the source network device is no longer the unit most recently written into the current queue. In short, because the write pointer obtained by the source network device and the write pointer in the source computing node are not synchronized in real time, the two become inconsistent: the write pointer obtained by the source network device is not the latest write pointer, and the processing of the network device is affected.
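As a concrete illustration of this conventional flow, the following C sketch shows the two read operations it requires. The function names and the DMA-read stand-in are hypothetical and do not come from the patent.

```c
/* Sketch of the conventional two-read flow, assuming a hypothetical DMA read
 * primitive; none of these names are from the patent. */
#include <stdint.h>

extern uint64_t device_read_u64(uint64_t phys_addr);  /* hypothetical DMA read */

/* Read 1 fetches the physical address at which the write pointer is stored;
 * read 2 fetches the write pointer itself. Between and after the two reads,
 * the compute node may keep adding units, so the result can be stale. */
uint64_t conventional_get_write_pointer(uint64_t wptr_addr_location)
{
    uint64_t wptr_phys = device_read_u64(wptr_addr_location); /* first read  */
    return device_read_u64(wptr_phys);                        /* second read */
}
```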

Moreover, as the volume of data processed in data centers grows, data of different service applications is transmitted between computing nodes through multiple queues, and the number of queues, and the amount of data, that each network device must handle also increase. The network device then cannot update its stored write pointer in real time according to the latest write pointer stored in the computing node, so the write pointer stored in the network device and the write pointer generated by the computing node become inconsistent, and data processing errors occur.

Disclosure of Invention

The application provides a data processing method, a network device, a computing node, and a communication system, which can reduce the data processing errors of the network device caused by inconsistency between the write pointer used by the network device and the write pointer stored by the computing node.

In a first aspect, the present application provides a method of data processing, the method comprising: a network device reads pointer information of a first queue from a queue information storage space of a computing node according to an identifier of the first queue, wherein the network device is connected to the computing node, the queue information storage space is disposed in a memory of the computing node, the network device is configured to implement communication between the computing node and other computing nodes based on a queue pair, the queue pair comprises a plurality of queues, the first queue is any one of the queues, the pointer information of the first queue comprises a first write pointer and a read pointer, the read pointer indicates the position of the next unit to be processed by the network device, and the first write pointer indicates the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current moment; the network device obtains the data to be processed in the first queue according to the read pointer and processes the data to be processed; and the network device updates the position of the unit indicated by the read pointer according to the first write pointer. The write pointer of the first queue in the queue information storage space of the computing node is the write pointer maintained by the computing node.

Therefore, in the above technical solution, the write pointer of the first queue that the network device reads from the queue information storage space is the latest write pointer determined by the computing node. This solves the problem in the conventional technology of data processing errors of the network device caused by inconsistency between the write pointer used by the network device and the write pointer stored by the computing node. During data processing, each time the computing node updates the position of the write pointer, the write pointer is transmitted directly to the network device. This avoids the two read operations that the conventional technology needs before the network device can obtain the pointer (the first read obtains the physical address at which the write pointer is stored in the computing node, and the second read fetches the write pointer from that physical address), reducing the number of read operations, lowering the processing load on the system, and improving processing efficiency. In addition, the conventional technology requires two storage spaces to store one write pointer: one holds the address of the write pointer, and the other holds the write pointer itself. The method provided by this application stores the write pointer using only one storage space, saving memory.
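The device-side flow of the first aspect can be pictured with the following C sketch. It assumes a simple ring buffer with monotonically growing pointer positions; all types and names (queue_info_t, wqe_t, process_wqe) are illustrative assumptions, not the patent's implementation.

```c
/* Minimal sketch of the first-aspect flow: read the pointer information of
 * the first queue, process pending units, and advance the read pointer up to
 * the first write pointer. Names and layout are assumptions. */
#include <stdint.h>

#define QUEUE_DEPTH 64u              /* assumed fixed queue depth */

typedef struct { uint8_t payload[64]; } wqe_t;

typedef struct {
    wqe_t    units[QUEUE_DEPTH];     /* the units of the first queue */
    uint32_t write_ptr;              /* first write pointer, maintained by the compute node */
    uint32_t read_ptr;               /* read pointer, advanced by the network device */
} queue_info_t;

extern void process_wqe(const wqe_t *w);  /* stand-in for real WQE handling */

void poll_first_queue(queue_info_t *q)
{
    uint32_t wp = q->write_ptr;      /* snapshot of the first write pointer */
    while (q->read_ptr != wp) {      /* pending units remain */
        process_wqe(&q->units[q->read_ptr % QUEUE_DEPTH]);
        q->read_ptr++;               /* update toward the write pointer */
    }
}
```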

In one possible implementation manner, the reading, by the network device, pointer information of the first queue from a queue information storage space of the computing node according to the identifier of the first queue includes: the network device reads the first write pointer from a first storage space in a queue information storage space of the computing node according to the identifier of the first queue, wherein the first storage space is used for storing the first write pointer.

In another possible implementation manner, before the network device updates the position of the unit indicated by the read pointer according to the first write pointer, the method further includes: the network device reads a second write pointer from a cache of the network device according to the identifier of the first queue, wherein the second write pointer is the write pointer in the pointer information of the first queue cached by the network device, and the cache of the network device is used for storing the pointer information of the first queue that has been read by the network device; the network device compares the first write pointer with the second write pointer; and when the network device determines that the unit indicated by the first write pointer is a unit in which an instruction is stored after an instruction is stored in the unit indicated by the second write pointer, the network device updates the write pointer in the pointer information of the first queue cached by the network device to the first write pointer. Based on this technical solution, the network device can update its cached write pointer of the first queue in real time, so that the cached write pointer of the first queue is consistent with the write pointer in the queue information storage space of the computing node.

In another possible implementation, the comparing, by the network device, of the first write pointer and the second write pointer includes: the network device compares the time information of the first write pointer with the time information of the second write pointer; and the updating, by the network device, of the cached write pointer to the first write pointer includes: the network device updates the write pointer in the pointer information of the first queue cached by the network device to the first write pointer when it determines that the time information of the first write pointer is earlier than the time information of the second write pointer.

In another possible implementation, the comparing, by the network device, of the first write pointer and the second write pointer includes: when the work queue elements in the queue are not cyclically reused, the network device compares a first distance with a second distance, wherein the first distance is the number of units between the first write pointer and the read pointer, and the second distance is the number of units between the second write pointer and the read pointer; and the updating, by the network device, of the cached write pointer to the first write pointer includes: the network device updates the write pointer in the pointer information of the first queue cached by the network device to the first write pointer when it determines that the first distance is greater than the second distance.
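For the non-cyclic case, the distance comparison above can be sketched in C as follows; the monotonic-position representation and all names are assumptions for illustration.

```c
/* Sketch of the distance-based check: update the cached (second) write
 * pointer only when the first write pointer lies farther ahead of the read
 * pointer, i.e. indicates a later-stored unit. Illustrative names only. */
#include <stdint.h>
#include <stdbool.h>

/* Distance = number of units between a write pointer and the read pointer,
 * valid while work queue elements are not cyclically reused (positions only
 * grow). */
uint32_t distance(uint32_t write_ptr, uint32_t read_ptr)
{
    return write_ptr - read_ptr;
}

bool refresh_cached_wptr(uint32_t first_wp, uint32_t second_wp,
                         uint32_t read_ptr, uint32_t *cached_wp)
{
    if (distance(first_wp, read_ptr) > distance(second_wp, read_ptr)) {
        *cached_wp = first_wp;   /* first write pointer is the newer one */
        return true;
    }
    return false;                /* keep the cached second write pointer */
}
```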

In another possible implementation manner, the reading, by the network device, of the pointer information of the first queue from the queue information storage space of the computing node according to the identifier of the first queue includes: the network device obtains the identifier of the first queue sent by the computing node; and the network device reads the pointer information of the first queue from the queue information storage space according to the identifier.

In another possible implementation manner, the obtaining, by the network device, of the identifier of the first queue includes: the network device detects that a preset field in a doorbell storage space has changed, wherein the doorbell storage space is disposed in the network device, and the computing node updates the preset field in the doorbell storage space when the write pointer of the first queue is updated; and the network device reads the identifier of the first queue from the doorbell storage space.
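The doorbell interaction can be sketched as follows; the doorbell layout, the polling loop, and all names are assumptions for illustration (a real device would use hardware-monitored registers and proper memory barriers rather than busy polling).

```c
/* Sketch of the doorbell mechanism: the compute node writes the queue id and
 * changes a preset field; the network device detects the change and reads the
 * queue identifier. Layout and names are illustrative assumptions. */
#include <stdint.h>

typedef struct {
    volatile uint32_t seq;       /* preset field updated by the compute node */
    volatile uint32_t queue_id;  /* identifier of the first queue */
} doorbell_t;

/* Compute-node side: ring the doorbell after updating the write pointer. */
void ring_doorbell(doorbell_t *db, uint32_t queue_id)
{
    db->queue_id = queue_id;
    db->seq = db->seq + 1;       /* change the preset field */
}

/* Network-device side: wait for the preset field to change, then return the
 * identifier of the queue whose pointer information should be read. */
uint32_t wait_doorbell(const doorbell_t *db, uint32_t last_seen_seq)
{
    while (db->seq == last_seen_seq)
        ;                        /* poll for a change */
    return db->queue_id;
}
```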

In another possible implementation manner, the obtaining, by the network device, of the identifier of the first queue includes: the network device receives the identifier of the first queue sent by the computing node.

In a second aspect, the present application provides a method of data processing, the method comprising: a computing node determines a queue information storage space, and a first storage space in the queue information storage space, according to the number of queues to be processed and the size of queue information, wherein the computing node communicates with other computing nodes through a network device based on queue pairs, each queue pair comprises a plurality of queues, and the computing node can directly perform a write operation on the first storage space; the computing node stores a write pointer of a first queue into the first storage space, wherein the first queue is any one of the queues, and the write pointer of the first queue indicates the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current time; and the computing node sends an identifier of the first queue to the network device. The write pointer of the first queue in the queue information storage space of the computing node is the write pointer maintained by the computing node. During data processing, each time the computing node updates the position of the write pointer, the write pointer is transmitted directly to the network device. This avoids the two read operations that the conventional technology needs before the network device can obtain the pointer (the first read obtains the physical address at which the write pointer is stored in the computing node, and the second read fetches the write pointer from that physical address), reducing the number of read operations, lowering the processing load on the system, and improving processing efficiency. In addition, the conventional technology requires two storage spaces to store one write pointer: one holds the address of the write pointer, and the other holds the write pointer itself. The method provided by this application stores the write pointer using only one storage space, saving memory.

In one possible implementation, the method further includes: the computing node determines that the position, in the first queue, of the last unit in which the computing node is allowed to store data at the current moment has changed; the computing node updates the position of the unit indicated by the write pointer of the first queue in the first storage space; and the computing node sends the identifier of the first queue and a first write pointer to the network device, wherein the first write pointer indicates the position, in the updated first queue, of the last unit in which the computing node is allowed to store data at the current moment.
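The compute-node side of the second aspect can be sketched as follows; the first-storage-space layout and the send_to_device() transport are hypothetical stand-ins, not the patent's interface.

```c
/* Sketch of the second aspect on the compute-node side: write the queue's
 * write pointer directly into the first storage space, then notify the
 * network device with the queue identifier and the updated pointer. */
#include <stdint.h>

extern void send_to_device(uint32_t queue_id, uint32_t write_ptr); /* stand-in */

typedef struct {
    uint32_t write_ptr;   /* first storage space: directly writable by the node */
} first_space_t;

void publish_write_pointer(first_space_t *fs, uint32_t queue_id, uint32_t new_wp)
{
    fs->write_ptr = new_wp;            /* direct write, no indirection */
    send_to_device(queue_id, new_wp);  /* identifier and first write pointer */
}
```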

In a third aspect, the present application provides a network device comprising means for performing the method of the first aspect or any one of its possible implementations.

In a fourth aspect, the present application provides a computing node comprising means for performing the method of the second aspect or any of its possible implementations.

In a fifth aspect, the present application provides a network device comprising a processor and a memory, the memory storing program code, the processor being configured to invoke the program code in the memory to perform the method according to the first aspect or any of the possible implementation manners of the first aspect.

In a sixth aspect, the present application provides a computing node comprising a processor and a memory, the memory storing program code, the processor being configured to invoke the program code in the memory to perform a method according to the second aspect or any of the possible implementations of the second aspect.

In a seventh aspect, the present application provides a computer-readable storage medium storing instructions for implementing the method of the first aspect or any one of the possible implementations of the first aspect.

In an eighth aspect, the present application provides a computer-readable storage medium storing instructions for implementing the method of the second aspect or any one of its possible implementations.

In a ninth aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or any one of the possible implementations of the first aspect.

In a tenth aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the second aspect or any of the possible implementations of the second aspect.

In an eleventh aspect, the present application further provides a system comprising the network device of the third aspect and the computing node of the fourth aspect.

In a twelfth aspect, the present application further provides a system that includes the network device of the fifth aspect and the computing node of the sixth aspect.

Drawings

Fig. 1 is a schematic diagram of a system architecture provided in an embodiment of the present application.

FIG. 2 is a schematic flowchart of a data processing method provided in this application.

Fig. 3 is a block diagram of a network device according to an embodiment of the present application.

Fig. 4 is a block diagram of another network device according to an embodiment of the present application.

Fig. 5 is a block diagram of a computing node according to an embodiment of the present application.

Fig. 6 is a block diagram of another computing node according to an embodiment of the present application.

Detailed Description

The technical solution in the present application will be described below with reference to the accompanying drawings.

The technical solution of the embodiments of the present application can be applied to network devices and computing nodes that support remote direct memory access (RDMA) technology, for example, the network devices and computing nodes of a data center supporting RDMA technology, or other network devices and computing nodes supporting RDMA technology. A computing node may be connected to a network device; a computing node is a device with computing capability, such as a server or a personal computer (e.g., a desktop computer or a notebook computer). The network device may be referred to as the network device of the computing node. The network device is a hardware device capable of connecting the computing node to a computer network by wire or wirelessly; in other words, the computing node may access the computer network through its network device. The network device may also be referred to as a network interface card (NIC), a network adapter, a physical network interface, and the like. The network device may be an RDMA-capable network device, such as an RDMA-capable network interface card (RNIC).

Optionally, in some embodiments, the network device of the computing node may be built into the computing node. For example, the network device of the computing node may be connected to the motherboard of the computing node through an interface such as a Peripheral Component Interconnect Express (PCIe) interface or a Cache Coherent Interconnect for Accelerators (CCIX) interface.

Optionally, in other embodiments, the network device of the computing node may be an external device of the computing node. For example, the network device may be connected to the computing node via a Universal Serial Bus (USB) interface.

In an embodiment of the present application, a computing node includes a hardware layer, an operating system layer running above the hardware layer, and an application layer running above the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement service processing through processes, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software. In addition, the embodiments of the present application do not particularly limit the specific structure of the execution body of the provided method, as long as the execution body can run a program recording the code of the method. For example, the execution body of the method may be a computing node, or a functional module in the computing node capable of calling and executing a program.

Fig. 1 is a schematic diagram of a system architecture provided in an embodiment of the present application. The system 100 shown in Fig. 1 includes a computing node 110, a network device 111, a computing node 120, and a network device 121.

Computing node 110 and computing node 120 may communicate with each other through respective network devices. The network device of computing node 110 may be network device 111 and the network device of computing node 120 may be network device 121.

The network device 111 and the network device 121 may be connected through a communication link; the medium of the communication link may be, for example, an optical fiber, and the embodiments of the present application do not limit the specific medium of the communication link between the network devices. The path between network device 111 and network device 121 may include one or more switching nodes, or the two may communicate directly. As shown in Fig. 1, the computing node 110 includes a storage device 112, which may be used to store queue information of the computing node 110. The computing node 120 includes a storage device 122, which may be used to store queue information of the computing node 120.

Although storage device 112 is shown in Fig. 1 as being within computing node 110 and storage device 122 within computing node 120, storage device 112 may also be a storage device externally attached to computing node 110 or to network device 111, and storage device 122 may also be a storage device externally attached to computing node 120 or to network device 121.

It will be appreciated that fig. 1 only shows the connection of two computing nodes through a network device. More computing nodes may be included in some RDMA technology-enabled networks (e.g., data center networks). Any two computing nodes in such a network may be connected by a method as shown in fig. 1. In other words, the system 100 shown in FIG. 1 may be a connection for any two computing nodes in a network that supports RDMA technology.

How two compute nodes in a network supporting RDMA technology process data is described below in conjunction with fig. 2.

FIG. 2 is a schematic flowchart of a data processing method provided in this application. As shown in FIG. 2, the method includes the following steps:

In step 201, a first computing node and a second computing node establish a connection and create a queue pair (QP).

In particular, the first computing node and the second computing node may establish a connection through respective network devices. For ease of description, the network device of the first computing node will be referred to hereinafter as the first network device and the network device of the second computing node will be referred to hereinafter as the second network device.

Taking fig. 1 as an example, the first computing node is computing node 110, the first network device may be network device 111, the second computing node may be computing node 120, and the second network device may be network device 121.

It is worth noting that, in the data transmission process between the first computing node and the second computing node, when the first computing node sends data to the second computing node, the first computing node is the source computing node and, correspondingly, the first network device is the source network device; the second computing node is the destination computing node and, correspondingly, the second network device is the destination network device. When the second computing node sends data to the first computing node, the second computing node is the source computing node and, correspondingly, the second network device is the source network device; the first computing node is the destination computing node and, accordingly, the first network device is the destination network device. For convenience of description, the following description takes the example of the first computing node sending data to the second computing node.

Suppose that the queue created by the first computing node is the send queue (SQ) of the queue pair and the queue created by the second computing node is the receive queue (RQ) of the queue pair. The send queue may be stored in a storage device of the first computing node and the receive queue in a storage device of the second computing node.

One or more work queue elements (WQEs) are included in the send queue created by the first computing node, and one or more WQEs are included in the receive queue created by the second computing node. The numbers of WQEs in the send queue and the receive queue belonging to the same QP may be the same or different. When they differ, the number of WQEs in the receive queue needs to be greater than or equal to the number of WQEs in the send queue, so that every command from the first computing node has a WQE in the receive queue to hold it.

After the send queue is created, the first computing node may store the data to be sent in the WQEs of the send queue. The first computing node sends data to the second computing node at WQE granularity. Optionally, in some possible implementations, the data sent by the first computing node to the second computing node may be stored directly in the WQE. This mode of storing data directly in WQEs may be referred to as a first mode, also called inline mode. Optionally, in other possible implementations, the first computing node may instead store the storage location information of the data in the WQE. The storage location information of the data to be processed may include the location (which may also be referred to as the address) and the length at which the data is stored in the first computing node. This mode of storing the storage location information in the WQE may be referred to as a second mode, also called non-inline mode.

The first computing node may determine whether to use the inline mode or the non-inline mode according to a preset rule. For example, in some embodiments, the first computing node may select the mode depending on the size of the data to be processed (that is, the length of the data): if the size of the data is larger than a preset threshold, the non-inline mode is selected; if it is smaller than or equal to the preset threshold, the inline mode is selected. For another example, in other embodiments, the first computing node may decide based on the type of the data to be processed: if the type belongs to one or more preset data types, the data may be sent using the inline mode; otherwise, the data is sent using the non-inline mode. Further, when determining the mode according to the data type, the length of the data must also be considered, that is, data sent in the inline mode must still be smaller than or equal to the preset threshold; even data of an inline-eligible type is sent inline only if its size is within the threshold. Besides these two preset rules, other preset rules may be used; the embodiments of the present application do not limit the specific way of deciding between the inline mode and the non-inline mode for sending the data to be processed.
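The two preset rules can be sketched as follows; the threshold value and the data-type names are invented for illustration.

```c
/* Sketch of inline/non-inline mode selection. INLINE_THRESHOLD and the type
 * names are assumptions, not values from the patent. */
#include <stddef.h>
#include <stdbool.h>

#define INLINE_THRESHOLD 64u  /* preset threshold in bytes (assumed) */

typedef enum { TYPE_CONTROL, TYPE_BULK } data_type_t;  /* illustrative types */

/* Rule 1: decide by size alone. */
bool inline_by_size(size_t len)
{
    return len <= INLINE_THRESHOLD;    /* at or under the threshold: inline */
}

/* Rule 2: decide by type, with the size constraint still applying. */
bool inline_by_type(data_type_t type, size_t len)
{
    return type == TYPE_CONTROL && len <= INLINE_THRESHOLD;
}
```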

The send queue may include at least one unit. In some possible implementations, the units of a send queue are in one-to-one correspondence with the WQEs in the send queue; in other words, each unit in the send queue is a WQE. In other possible implementations, multiple units in a send queue correspond to one WQE; in other words, one WQE of the send queue may be composed of multiple units.

Similarly, the receive queue may include at least one unit. In some possible implementations, the units in a receive queue are in one-to-one correspondence with the WQEs in the receive queue; in other words, each unit in the receive queue is a WQE. In other possible implementations, multiple units in a receive queue correspond to one WQE; in other words, one WQE of the receive queue may be composed of multiple units.

The depth of the send queue (i.e., the number of units it includes) is determined when the queue is created. As mentioned above, one unit may be a WQE, or a plurality of units may constitute a WQE; therefore, once the depth of the send queue is determined, the number of WQEs it can hold is also determined. In some cases, the number of WQEs required for the data that the first computing node needs to send to the second computing node may exceed the number of units in the send queue it created. In some possible implementations, the first computing node may cyclically reuse the units in the send queue. In other possible implementations, the first computing node may expand the send queue, that is, increase the number of units in it, and send data using the newly added units.

Accordingly, the depth of the receive queue is also determined when the queue is created. As with the send queue, one unit in the receive queue may be a WQE, or a plurality of units may constitute a WQE; therefore, once the depth of the receive queue is determined, the number of WQEs it can hold is also determined. In some cases, the number of WQEs required by the second computing node to receive data from the first computing node may exceed the number of units in the receive queue it created. In some possible implementations, the second computing node may cyclically reuse the units in the receive queue. In other possible implementations, the second computing node may expand the receive queue, that is, increase the number of units in it, and receive data using the newly added units.
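Cyclic reuse of units can be sketched as follows, assuming monotonically growing pointer positions whose unit index wraps modulo the queue depth; this representation is an illustrative assumption.

```c
/* Sketch of cyclic unit reuse with a fixed depth: positions grow without
 * bound and the unit index wraps, so a unit is reused once the consumer has
 * moved past it. Names are illustrative. */
#include <stdint.h>
#include <stdbool.h>

#define DEPTH 128u   /* queue depth, fixed at creation time */

uint32_t unit_index(uint32_t ptr) { return ptr % DEPTH; }

/* A new WQE may reuse the unit at the producer position only if the queue is
 * not full, i.e. the producer has not lapped the consumer. */
bool can_enqueue(uint32_t write_ptr, uint32_t read_ptr)
{
    return (write_ptr - read_ptr) < DEPTH;
}
```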

How the first and second compute nodes recycle the WQEs in the queue and how the queue is expanded will be described later.

Fig. 1 shows only a send queue and a receive queue; the embodiments of the present application are also applicable to other forms of queues, such as completion queues (CQ) and submission queues. For ease of description, the following contents of the embodiments are described in detail using the send queue and the receive queue as examples.

The first computing node may create queue information corresponding to the send queue, and the second computing node may create queue information corresponding to the receive queue. To distinguish the two, the queue information corresponding to the send queue is hereinafter referred to as send queue information, and the queue information corresponding to the receive queue as receive queue information.

The send queue information may also be referred to as the send queue context, send-queue-related information, send queue context information, and the like. The send queue information may include the characteristics of the send queue and the working state of the send queue. The characteristics of the send queue include the physical address of the pending WQEs of the send queue in the storage device of the first computing node, the index address of the send queue, and the send queue depth. The index address of the send queue may indicate the physical address of each WQE of the send queue in the storage device of the first computing node. The send queue depth indicates the number of units contained in the send queue. The working state of the send queue includes the send queue state and the send queue validity: the send queue state indicates the current state of the send queue, for example a reset state, an initialization state, or an error state, and the send queue validity indicates whether the send queue exists. The working state of the send queue may also include pointer information. More specifically, the pointer information may include a read pointer (which may also be referred to as a consumer pointer) and a write pointer (which may also be referred to as a producer pointer). The read pointer points to the unit of the send queue from which the network device is to read an instruction at the current time (hereinafter referred to as the current unit to be processed); the current unit to be processed refers to the unit being processed according to an input/output (IO) command. For example, where units are in one-to-one correspondence with WQEs, the read pointer points to the position of the current unit to be processed; where multiple units correspond to one WQE, the read pointer points to the first of the units that make up the WQE currently being processed. The write pointer points to the position of the unit of the send queue that most recently stored an instruction. For example, where units are in one-to-one correspondence with WQEs, the write pointer points to the position of the unit holding the WQE that most recently stored an instruction in the send queue; where multiple units correspond to one WQE, the write pointer points to the position of the last of the units that make up the WQE that most recently stored an instruction in the send queue.
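The fields enumerated above can be pictured as a context structure; the following C layout is an illustrative assumption, not the patent's actual encoding.

```c
/* Illustrative layout of the send queue information ("send queue context").
 * Field names and widths are assumptions for the sake of the sketch. */
#include <stdint.h>

typedef struct {
    /* characteristics of the send queue */
    uint64_t wqe_phys_addr;  /* physical address of pending WQEs in storage */
    uint64_t index_addr;     /* index address: per-WQE physical addresses */
    uint32_t depth;          /* number of units in the send queue */
    /* working state of the send queue */
    uint8_t  state;          /* e.g. reset, initialization, or error state */
    uint8_t  valid;          /* whether the send queue exists */
    uint32_t read_ptr;       /* consumer pointer: next unit to be processed */
    uint32_t write_ptr;      /* producer pointer: unit most recently storing an instruction */
} send_queue_ctx_t;
```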

The send queue information may be stored in a storage device that the first computing node can access directly or indirectly. Indirect access to the storage device here means that the storage device is accessible through the network device of the first computing node, i.e., the first network device. Taking Fig. 1 as an example again, if the first computing node is computing node 110, the storage device may be storage device 112.

In some possible embodiments, the storage device may be the main memory of the first computing node. In other possible embodiments, the storage device may be a memory external to the first computing node; the external memory may be attached to the first computing node or to the first network device. Both the main memory and the external memory may be random access memory (RAM), such as double data rate synchronous dynamic random access memory (DDR SDRAM), dynamic random access memory (DRAM), or static random access memory (SRAM). The storage device may be a dedicated storage device used only for storing queue information, or a general-purpose storage device that stores the queue information along with other information. For example, the send queue information and the send queue may be stored in the same storage device. For convenience of description, the storage space used for storing queue information in the storage device is hereinafter referred to as the queue information storage space.

It will be appreciated that the first computing node in the embodiment shown in fig. 2 creates a send queue and send queue information as the source computing node. In other embodiments, the first computing node may also create a receive queue and receive queue information as a destination computing node. The queue information storage space may also be used to store receive queue information.

According to read and write permissions, the queue information storage space can be divided into three parts: storage space 1, storage space 2, and storage space 3. The access permissions of storage space 1, storage space 2, and storage space 3 are as follows:

the first network device may directly perform read and write operations on storage space 1. The first computing node may not write to storage space 1, either directly or indirectly, but may read from it.

The first network device may directly read from storage space 2. The first computing node may write to storage space 2 indirectly.

The first network device may directly read from storage space 3. The first computing node may write directly to storage space 3.

Optionally, the first computing node indirectly performs the write operation on the storage space 2, that is, the first computing node sends the data to be written into the storage space 2 to the first network device, and the first network device writes the data into the storage space 2.

Optionally, the indirect writing of storage space 2 by the first computing node may include the following. The first computing node applies to the first network device for a lock controlled by the first network device. When the first computing node applies for the lock, in order to avoid data inconsistency, the storage space 2 to be written must be locked, and at that moment only the first computing node is allowed to write to storage space 2. The first computing node then writes the data into storage space 2 and, after the write operation is completed, applies to the first network device to release the lock. If the first computing node has not obtained the lock, it cannot perform a write operation on storage space 2. The first computing node may apply for the lock controlled by the first network device by sending a lock request to the first network device. The first network device may send feedback to the first computing node informing it whether the lock has been granted. After completing the write operation, the first computing node may send a release request to the first network device to request release of the lock. The first computing node may send the lock request and the release request directly to the first network device over a communication interface between the two. In some possible implementations, the first network device may also send the feedback directly to the first computing node over the communication interface. In other possible implementations, the first network device may write the feedback to a specified location in the storage device, and the first computing node may obtain the feedback by reading the content stored at that location.
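The lock-mediated write can be sketched as follows; the lock API and the feedback path are hypothetical stand-ins for the device interface described above.

```c
/* Sketch of the indirect (lock-mediated) write to storage space 2. The
 * lock_request()/lock_release() calls are hypothetical stand-ins for the
 * lock request, feedback, and release request exchanged with the device. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

extern bool lock_request(unsigned space_id);   /* ask the device for the lock */
extern void lock_release(unsigned space_id);   /* ask the device to release it */

bool write_space2(void *space2, const void *data, size_t len)
{
    if (!lock_request(2))        /* no lock granted: no write is possible */
        return false;
    memcpy(space2, data, len);   /* only this node may write while locked */
    lock_release(2);             /* release after the write completes */
    return true;
}
```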

Direct writing of storage space 3 by the first computing node means that the first computing node can write the data directly into storage space 3, without the data having to be written through another device (for example, the first network device).

Storage space 1, storage space 2, and storage space 3 may hold different contents of the queue information. As described above, the send queue information may include the physical address of the pending WQEs in the storage device of the first computing node, the index address of the send queue, the send queue depth, the send queue state, the send queue validity, and the pointer information.

Optionally, in some possible implementations, the different contents of the send queue information may be stored in different storage spaces according to whether the computing node is allowed to modify them and whether modification by the computing node affects the network device's processing of the units in the queue.

Specifically, contents of the send queue information that the computing node is not allowed to modify may be stored in storage space 1. Contents that the computing node is allowed to modify, but whose modification may affect the network device's processing of the units in the queue, may be stored in storage space 2. Contents that the computing node is allowed to modify and whose modification does not affect the network device's processing of the units in the queue may be stored in storage space 3.

For example, in some embodiments, the first computing node may expand the created send queue, i.e., change the send queue depth. If the network device continued to process the units in the queue while the queue was being expanded, an error might occur. The send queue depth is therefore kept in storage space 2, so the first computing node must modify it through the first network device. On determining that the first computing node is about to modify the queue depth, the first network device may stop processing the send queue, and resume processing it after the depth has been modified.

As another example, the write pointer points to the position of the unit that most recently stored an instruction. Updating the write pointer does not affect the first network device's processing of the send queue, so the write pointer can be kept in storage space 3.

Optionally, in other possible implementations, the content stored in each storage space may be set according to a preset rule. For example, the preset rule may be: the write pointer is stored in storage space 3, and the remaining contents of the queue information are stored in the manner in which queue information is stored in existing RDMA systems. As another example, the preset rule may be: the write pointer and the read pointer are stored in storage space 3, and the remaining contents of the queue information are stored in the manner in which queue information is stored in existing RDMA systems.

Optionally, in some possible implementations, the first computing node may set aside the queue information storage space within the storage space of the storage device during the initialization stage of the first computing node, and divide the queue information storage space into a storage space 1, a storage space 2 and a storage space 3 according to the sizes of the contents to be stored in the different types of storage space. Each queue has its own set of storage spaces 1, 2 and 3, which store the queue information of that queue only. The initialization stage of the first computing node is the stage in which the first computing node is set up before it starts working. After the initialization stage, the first computing node may enter the run stage, i.e., the stage in which it can provide services such as reading and writing data. The first computing node may determine the size of the queue information storage space according to the number of queues it can process simultaneously. For example, if the first computing node can process K queues simultaneously and the queue information of each of the K queues needs R bits of storage (where R is a positive integer greater than or equal to 1), the first computing node may set the size of the queue information storage space to K × R bits. The first computing node may then divide the queue information storage space into the three types of storage space according to the sizes of the information to be kept in the storage space 1, the storage space 2 and the storage space 3. Again assume that the first computing node can process K queues simultaneously, and that within each piece of queue information the information to be kept in the storage space 1 is R1 bits, the information to be kept in the storage space 2 is R2 bits, and the information to be kept in the storage space 3 is R3 bits. Then the size of the storage space 1 is K × R1 bits, the size of the storage space 2 is K × R2 bits, and the size of the storage space 3 is K × R3 bits, where the sum of R1, R2 and R3 is R.
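
The sizing arithmetic above can be made concrete with a small sketch; the values of K, R1, R2 and R3 below are illustrative only.

    #include <stdio.h>

    int main(void)
    {
        const unsigned K  = 1024;   /* queues processed simultaneously (example) */
        const unsigned R1 = 256;    /* bits per queue in storage space 1 (example) */
        const unsigned R2 = 64;     /* bits per queue in storage space 2 (example) */
        const unsigned R3 = 64;     /* bits per queue in storage space 3 (example) */
        const unsigned R  = R1 + R2 + R3;   /* bits of queue information per queue */

        printf("queue info storage space: %u bits\n", K * R);   /* K x R */
        printf("space 1: %u bits, space 2: %u bits, space 3: %u bits\n",
               K * R1, K * R2, K * R3);

        /* per-queue offset of queue q inside each partition, in bits */
        unsigned q = 7;             /* example queue index */
        printf("queue %u: off1=%u off2=%u off3=%u\n",
               q, q * R1, q * R2, q * R3);
        return 0;
    }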

Similar to the send queue information, the receive queue information may also be referred to as receive queue context, receive queue related information, receive queue context information, and the like. The receive queue information may include characteristics of the receive queue and the working state of the receive queue. The characteristics of the receive queue include the physical address of the WQE to be processed in the receive queue in the memory of the destination compute node, the index address of the receive queue, and the receive queue depth. The index address of the receive queue may indicate the physical address of each WQE in the receive queue in the memory of the destination compute node. The receive queue depth is used to indicate the number of units contained in the receive queue. The working state of the receive queue includes the receive queue state and the receive queue validity. The receive queue state is used to indicate the current state of the receive queue, for example a reset state, an initialization state, or an error state. The receive queue validity is used to indicate whether the receive queue exists. The working state of the receive queue may also include pointer information. More specifically, the pointer information may include a read pointer and a write pointer. The read pointer points to the unit in the receive queue from which the network device is to read an instruction at the current time (hereinafter referred to as the current unit to be processed). The current unit to be processed is the unit being processed according to an input/output (IO) command. For example, where units are in a one-to-one correspondence with WQEs, the read pointer points to the location of the current unit to be processed. Where multiple units correspond to one WQE, the read pointer points to the first unit of the multiple units that make up the WQE currently being processed. The write pointer points to the unit that most recently stored an instruction. For example, where units are in a one-to-one correspondence with WQEs, the write pointer points to the location of the unit holding the WQE of the instruction most recently stored in the receive queue. Where multiple units correspond to one WQE, the write pointer points to the location of the last unit of the multiple units making up the WQE of the instruction most recently stored in the receive queue.

Similar to the first computing node maintaining the send queue information, the receive queue information may be kept in a storage device that the second computing node can access directly or indirectly. Indirect access to the storage device here means that the storage device is accessed through the network device of the second computing node, i.e., the second network device. Again taking FIG. 1 as an example, if the second compute node is compute node 120, the storage device may be storage device 122.

Similar to the storage device holding the send queue information, in some embodiments the storage device holding the receive queue information may be the main memory of the second computing node. In other embodiments, it may be a memory external to the second computing node; the external memory may be attached to the second computing node or to the second network device. The main memory and the external memory may be random access memory, for example double data rate synchronous dynamic random access memory or static random access memory. The storage device may be a dedicated storage device used only for storing queue information, or a general storage device that stores queue information along with other information. For convenience of description, the storage space in the storage device used for storing the receive queue information is hereinafter also referred to as the queue information storage space. It will be appreciated that the second computing node in the embodiment shown in fig. 2 creates a receive queue and receive queue information as the destination computing node. In other embodiments, the second computing node may also create a send queue and send queue information as the source computing node, and the queue information storage space may then also be used to store that send queue information.

Optionally, according to read-write permissions, the queue information storage space may likewise be divided into three parts: storage space 4, storage space 5 and storage space 6. The access permissions of the storage space 4, the storage space 5 and the storage space 6 are as follows:

The second network device may directly perform read and write operations on the storage space 4. The second computing node may not write to the storage space 4, either directly or indirectly, but it may read from the storage space 4.

The second network device may directly read from the storage space 5. The second computing node may indirectly write to the storage space 5.

The second network device may directly read from the storage space 6. The second computing node may directly write to the storage space 6.

Optionally, that the second computing node indirectly performs the write operation on the storage space 5 means that the second computing node sends the data to be written into the storage space 5 to the second network device, and the second network device writes the data into the storage space 5.

Optionally, the indirect write to the storage space 5 by the second computing node may proceed as follows: the second computing node applies to the second network device for a lock controlled by the second network device. When the lock is granted, the storage space 5 to be written is locked in order to avoid data inconsistency, and at that moment only the second computing node is allowed to write to the storage space 5. The second computing node then writes the data to be written into the storage space 5, and after the write operation is completed it applies to the second network device to release the lock. If the second computing node has not obtained the lock, it cannot perform any write operation on the storage space 5.

That the second computing node directly writes to the storage space 6 means that the second computing node can write the data to be written into the storage space 6 itself, without going through another device (for example, the second network device).

Storage space 4, storage space 5 and storage space 6 may hold different contents of the queue information. As described above, the receive queue information may include the physical address of the pending WQE in the storage device of the second compute node, the index address of the receive queue, the receive queue depth, the receive queue status, the receive queue validity, and pointer information.

Optionally, in some possible implementations, different contents of the receive queue information may be stored in different storage spaces according to whether the computing node is allowed to modify them, and whether the units in the queue being processed by the network device are affected while the computing node modifies them.

Specifically, contents of the receive queue information that the computing node is not allowed to modify may be stored in the storage space 4. Contents that the computing node is allowed to modify, but whose modification may affect the units in the queue being processed by the network device, may be stored in the storage space 5. Contents that the computing node is allowed to modify and whose modification does not affect the units in the queue being processed by the network device may be stored in the storage space 6.

For example, in some embodiments, the second computing node may expand the created receive queue, i.e., change the receive queue depth. However, if the network device continues to process the units in the queue while the queue is being expanded, an error may occur. The receive queue depth is therefore kept in the storage space 5, so the second computing node has to modify the receive queue depth through the second network device. Upon determining that the second computing node is to modify the queue depth, the second network device may stop processing the receive queue, and resume processing it after the receive queue depth has been modified.

As another example, the write pointer points to the unit that most recently stored an instruction. Updating the write pointer does not affect the processing of the receive queue by the second network device. The write pointer can therefore be kept in the storage space 6.

Optionally, in other possible implementations, the content stored in each storage space may be set according to a preset rule. For example, the preset rule may be: the write pointer is stored in the storage space 6, and the contents of the queue information other than the write pointer are stored in the way queue information is stored in an existing RDMA system. As another example, the preset rule may be: the write pointer and the read pointer are stored in the storage space 6, and the contents of the queue information other than the write pointer and the read pointer are stored in the way queue information is stored in an existing RDMA system.

Optionally, in some possible implementations, the second computing node may set aside the queue information storage space within the storage space of the storage device during the initialization stage of the second computing node, and divide the queue information storage space into the storage space 4, the storage space 5 and the storage space 6 according to the sizes of the contents to be stored in the different types of storage space. The initialization stage of the second computing node is the stage in which the second computing node is set up before it starts working. After the initialization stage, the second computing node may enter the run stage, i.e., the stage in which it can provide services such as reading and writing data. The second computing node may determine the size of the queue information storage space according to the number of queues it can process simultaneously. For example, if the second computing node can process S queues simultaneously and the queue information of each of the S queues needs T bits of storage (where T is a positive integer greater than or equal to 1), the second computing node may set the size of the queue information storage space to S × T bits. The second computing node may then divide the queue information storage space into the three types of storage space according to the sizes of the information to be kept in the storage space 4, the storage space 5 and the storage space 6. Again assume that the second computing node can process S queues simultaneously, and that within each piece of queue information the information to be kept in the storage space 4 is T1 bits, the information to be kept in the storage space 5 is T2 bits, and the information to be kept in the storage space 6 is T3 bits. Then the size of the storage space 4 is S × T1 bits, the size of the storage space 5 is S × T2 bits, and the size of the storage space 6 is S × T3 bits, where the sum of T1, T2 and T3 is T.

Step 202: the first computing node notifies the first network device to process the send queue.

When a new instruction is stored in the send queue, the write pointer position of the send queue changes, and the first computing node may notify the first network device of the pending send queue through a doorbell mechanism. The first computing node stores data in a preset format into a register or storage space pre-agreed with the first network device; when the first network device detects that the content of the pre-agreed register or storage space has changed, it reads the data in the preset format from that register or storage space. That is, the doorbell mechanism uses a pre-agreed register or storage space to convey data in a preset format. For example, where the doorbell mechanism is implemented with a register, each queue may have a corresponding queue identifier, such as a queue number (QN) or a queue name. The first computing node may write the queue identifier corresponding to the queue (i.e., the send queue determined in step 201) into the register. After detecting the doorbell, the first network device may read the queue identifier in the register and record it. Optionally, after reading and recording the queue identifier, the first network device notifies the first computing node that the queue identifier stored in the register can be deleted, and the first computing node deletes it upon receiving the notification.

Alternatively, queue identifiers may be saved to the register on a first-in-first-out basis, so that a queue identifier is deleted from the register once it has been read.

Optionally, the first computing node may also directly send a message carrying the queue identifier to the first network device. In this way, the first network device may determine, upon receiving the message, that the queue corresponding to the queue identifier in the message needs to be processed.
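
A minimal sketch of the doorbell mechanism, assuming a single memory-mapped doorbell register shared by the computing node and the network device and a reserved value meaning "no pending queue"; the register layout and encoding are assumptions for illustration.

    #include <stdint.h>

    /* Stand-in for a memory-mapped doorbell register pre-agreed between the
     * computing node and the network device; address and encoding are assumed. */
    static volatile uint32_t doorbell;
    #define QN_NONE 0u                 /* assumed "no pending queue" value */

    /* Computing node side: ring the doorbell with the queue number (QN). */
    static void ring_doorbell(uint32_t queue_number)
    {
        doorbell = queue_number;       /* the write is what the device detects */
    }

    /* Network device side: detect the doorbell, record the queue identifier,
     * then clear the register so the stored identifier is deleted. */
    static uint32_t poll_doorbell(void)
    {
        uint32_t qn;
        while ((qn = doorbell) == QN_NONE)
            ;                          /* wait for the content to change */
        doorbell = QN_NONE;            /* identifier may now be deleted */
        return qn;                     /* recorded for later processing */
    }

    int main(void)
    {
        ring_doorbell(5u);             /* computing node rings for queue 5 */
        return (int)poll_doorbell();   /* network device records queue 5 */
    }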

Similarly, the second computing node may also notify the second network device to process the receive queue, in the same manner as the first computing node notifies the first network device to process the send queue. For brevity, this is not described further here.

Step 203: the first network device obtains the send queue information according to the queue identifier.

Through step 202, the first network device may determine that a send queue needs to be processed and may obtain the queue identifier of that send queue. As described above, a plurality of queues exist between the first computing node and the first network device, each queue may have a corresponding queue identifier, and each queue also corresponds to one piece of queue information. The first network device may therefore determine, according to the obtained queue identifier, the send queue information indicated by the queue identifier in the queue information storage space.

Optionally, the first network device may be provided with a buffer for queue information, which may be implemented using RAM. The buffer holds at least one piece of queue information. In some embodiments, the first network device may first determine, according to the queue identifier, whether the send queue information indicated by the queue identifier is present in the buffer. If it is, the send queue information can be obtained directly from the buffer; if it is not, the first network device may obtain the send queue information from the queue information storage space.

In the case that the first network device includes the buffer of queue information, if the send queue information is obtained from the queue information storage space, the first network device may further store the send queue information into the buffer.

Optionally, the first network device may first determine whether the buffer has enough free space to store the send queue information. If it does, the send queue information can be stored into the buffer directly. If it does not, one or more pieces of queue information stored in the buffer may be deleted according to a preset rule before the send queue information is stored. For example, the preset rule may be to delete the queue information written into the buffer earliest. As another example, the preset rule may be to delete a specified number of pieces of queue information written into the buffer earliest. As another example, the preset rule may be to delete queue information written into the buffer before a specified time. As yet another example, the preset rule may be to randomly delete a specified number of pieces of queue information in the buffer.
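
The buffer lookup and the first preset rule above (delete the queue information written into the buffer earliest) might be sketched as follows; the structure layout and buffer capacity are assumptions.

    #define CACHE_SLOTS 4              /* assumed buffer capacity */

    struct queue_info {
        unsigned qid;                  /* queue identifier */
        unsigned read_ptr, write_ptr, depth;
    };

    static struct queue_info cache[CACHE_SLOTS];
    static unsigned used;              /* number of occupied slots */
    static unsigned next_evict;        /* earliest-written slot (FIFO order) */

    /* Look the queue information up in the buffer first. */
    static struct queue_info *cache_lookup(unsigned qid)
    {
        for (unsigned i = 0; i < used; i++)
            if (cache[i].qid == qid)
                return &cache[i];      /* hit: use the buffered information */
        return 0;                      /* miss: read the queue info storage space */
    }

    /* Insert queue information read from the storage space, deleting the
     * earliest-written entry when the buffer has no free space. */
    static void cache_insert(const struct queue_info *qi)
    {
        if (used < CACHE_SLOTS) {
            cache[used++] = *qi;       /* enough free space: store directly */
            return;
        }
        cache[next_evict] = *qi;       /* overwrite the earliest entry */
        next_evict = (next_evict + 1) % CACHE_SLOTS;
    }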

As described above, the first computing node may directly update the write pointer in the send queue information, and after doing so may notify the first network device of the updated write pointer. Thus, in some possible implementations, the first network device may also obtain a write pointer in other ways. For example, when the first network device acquires the queue identifier, it may read the write pointer directly from the storage space 3 in the queue information storage space according to the queue identifier. As another example, where a doorbell mechanism is used to transfer the queue identifier, the doorbell may carry the write pointer in addition to the queue identifier. As yet another example, the first computing node may send the write pointer to the first network device in a dedicated message. To distinguish this write pointer from the write pointer in the send queue information, the write pointer acquired by the first network device through the doorbell or a dedicated message is hereinafter referred to as write pointer 1, and the write pointer acquired by the first network device from the send queue information is referred to as write pointer 2.

When the first network device has acquired write pointer 1, it may process the send queue according to write pointer 2, write pointer 1 and the send queue information. The network device may compare write pointer 2 with write pointer 1, determine which of the two is newer, and process the send queue based on the newer write pointer and the send queue information. If the two write pointers are the same, the send queue may be processed according to the acquired send queue information as is.

Optionally, in some possible implementations, provided WQEs in the queue have not been recycled, the latest write pointer may be determined by comparing the distance between write pointer 2 and the read pointer with the distance between write pointer 1 and the read pointer. WQEs in the queue not having been recycled means that no unit in the queue has been used to store two different instructions or two different pieces of data. For example, as shown in FIG. 1, send queue 1 includes 4 WQEs; the WQEs not having been recycled means that compute node 110 has not used any unit to store two different instructions or pieces of instruction data.

Specifically, in the case where the send queue information is determined from the buffer of queue information, the one of write pointer 2 and write pointer 1 that is farther from the read pointer (the read pointer here refers to the read pointer in the send queue information) is determined. If write pointer 2 is farther from the read pointer, i.e., the distance between write pointer 2 and the read pointer (hereinafter the "second distance") is greater than the distance between write pointer 1 and the read pointer (hereinafter the "first distance"), write pointer 2 is the newer write pointer; if write pointer 1 is farther from the read pointer, i.e., the first distance is greater than the second distance, write pointer 1 is the newer write pointer. Therefore, if write pointer 2 is farther from the read pointer, write pointer 2 can be determined to be the newest write pointer; if write pointer 1 is farther from the read pointer, write pointer 1 can be determined to be the newest write pointer.

In some possible implementations, the distance between write pointer 2 and the read pointer may refer to the number of units between the unit indicated by write pointer 2 and the unit indicated by the read pointer. Similarly, the distance between write pointer 1 and the read pointer may refer to the number of units between the unit indicated by write pointer 1 and the unit indicated by the read pointer. Assume that the number of units between the unit indicated by write pointer 1 and the unit indicated by the read pointer is M1, and the number of units between the unit indicated by write pointer 2 and the unit indicated by the read pointer is M2. If M2 is greater than M1, the second distance is greater than the first distance; if M2 is less than M1, the second distance is less than the first distance; if M1 is equal to M2, the first distance is the same as the second distance.
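
Under the stated precondition that WQEs have not been recycled, the distance comparison might be sketched as follows. The modulo form below additionally assumes that unit positions are indices into a circular queue of the given depth, which goes slightly beyond the text.

    /* Number of units between 'from' and 'to' in a circular queue. */
    static unsigned ring_distance(unsigned from, unsigned to, unsigned depth)
    {
        return (to + depth - from) % depth;
    }

    /* Pick the newer of two write pointers by their distance from the read
     * pointer; valid while no unit has been reused for different data. */
    static unsigned newest_write_ptr(unsigned wp1, unsigned wp2,
                                     unsigned rp, unsigned depth)
    {
        unsigned m1 = ring_distance(rp, wp1, depth);  /* first distance  */
        unsigned m2 = ring_distance(rp, wp2, depth);  /* second distance */
        if (m2 > m1)
            return wp2;                /* write pointer 2 is newer */
        if (m1 > m2)
            return wp1;                /* write pointer 1 is newer */
        return wp1;                    /* equal: the two pointers are the same */
    }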

Optionally, in other possible implementations, the write pointer may have corresponding time information. The time information of the write pointer may be the time at which data was saved to the unit indicated by the write pointer, or the time at which the compute node started to store data at the location of the WQE corresponding to the write pointer. In this case, the latest write pointer can be determined from the time information of write pointer 2 and the time information of write pointer 1. If the time information of write pointer 2 is later than that of write pointer 1, write pointer 2 is the latest write pointer; if the time information of write pointer 1 is later than that of write pointer 2, write pointer 1 is the latest write pointer; if the time information of the two is the same, the two write pointers are the same. For example, the time information of write pointer 2 may be 19:38:59 on March 1, 2019, and the time information of write pointer 1 may be 19:40:03 on March 1, 2019. The time indicated by the time information of write pointer 1 is later than that of write pointer 2, so write pointer 1 can be determined to be the newest write pointer.
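
Where time information is available, the comparison reduces to picking the later timestamp; a sketch, with the structure layout assumed:

    #include <time.h>

    struct stamped_wp { unsigned pos; time_t when; };  /* assumed layout */

    /* Return the write pointer whose time information is later; if the times
     * are equal, the two write pointers are the same and either may be used. */
    static struct stamped_wp newest_by_time(struct stamped_wp wp1,
                                            struct stamped_wp wp2)
    {
        return (wp2.when > wp1.when) ? wp2 : wp1;
    }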

If write pointer 1 is the latest write pointer, write pointer 2 in the send queue information may be replaced with write pointer 1 to obtain updated send queue information, and the send queue may be processed according to the updated send queue information.

Optionally, upon determining the updated send queue information, the first network device may further update the send queue information in the buffer of queue information: it may delete the old send queue information from the buffer and store the updated send queue information, or it may directly replace write pointer 2 in the buffered send queue information with write pointer 1.

As described above, the first computing node may directly update the write pointer in the send queue information. Thus, whenever the first computing node updates the write pointer, it notifies the first network device to process the send queue again (i.e., step 202 is performed again), and the first network device acquires the send queue information again (i.e., step 203 is performed again); at that point the write pointer in the send queue information is the updated write pointer. In other words, each time the first computing node updates the write pointer it performs step 202 once, and correspondingly the first network device performs step 203 once. The unit indicated by the write pointer acquired by the first network device is therefore the unit in which the first computing node most recently stored data, which avoids the situation where the unit indicated by the acquired write pointer is inconsistent with the unit in which the first computing node most recently stored data.

Correspondingly, the second network device obtains the receive queue information in a manner similar to that in which the first network device obtains the send queue information, and the description is not repeated here.

Step 204: the first network device executes a send command according to the send queue information, and sends the data stored in the WQEs in the send queue to the second network device. Correspondingly, the second network device executes a receive command according to the receive queue information, and stores the received data from the first network device into the second computing node.

Again taking fig. 1 as an example, assume that the first compute node needs to send data 1, data 2, data 3 and data 4 to the second compute node, each stored in one WQE. In other words, four WQEs are required, each storing one of data 1 through data 4. As described above, in inline mode the data itself is stored in the WQE; in non-inline mode the storage location information of the data is stored in the WQE.

As shown in FIG. 1, each of WQE 11 through WQE 14 corresponds to one unit, and the send queue 1 created by the first compute node includes four WQEs in total. Thus, each of data 1 through data 4 may be stored in one of the four WQEs: data 1 in WQE 11, data 2 in WQE 12, data 3 in WQE 13, and data 4 in WQE 14 of send queue 1. Suppose data 1 through data 4 are stored into WQE 11 through WQE 14 in sequence; in other words, WQE 11 is the first WQE to which data is written and WQE 14 is the last. The read pointer in the send queue information then points to the location corresponding to WQE 11, location 11, and the write pointer points to the location corresponding to WQE 14, location 14. The first network device may read the send queue information, obtain WQE 11 in the send queue according to the read pointer, and send data 1 corresponding to WQE 11 to the second network device.

Correspondingly, the receive queue created by the second compute node also includes four WQEs: WQE 21, WQE 22, WQE 23 and WQE 24. WQE 21 through WQE 24 in the receive queue point to four different storage locations in the second compute node's memory, respectively.

The second network device reads the receive queue information corresponding to the receive queue, obtains WQE 21 in the receive queue according to the read pointer in the receive queue information, and stores the received data 1 to the storage location in the memory of the second computing node pointed to by WQE 21. After the first network device finishes processing the data corresponding to WQE 11, it obtains WQE 12 in the send queue and sends data 2 corresponding to WQE 12 to the second network device, which stores the received data to the storage location in the memory of the second computing node pointed to by WQE 22 in the receive queue, and so on.
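
The processing loop implied by this example — fetch the unit at the read pointer, send its data, advance until the unit indicated by the write pointer has been processed — might be sketched as follows, assuming inline mode and a hypothetical transmit helper wqe_send.

    #include <stdbool.h>

    struct wqe { const void *data; unsigned len; };  /* assumed inline-mode WQE */

    /* Hypothetical transmit helper standing in for the device datapath. */
    static bool wqe_send(const struct wqe *w) { (void)w; return true; }

    /* Process the send queue: the read pointer indicates the next unit to
     * process, the write pointer the last unit stored by the computing node.
     * Assumes at least one pending unit (e.g., signalled by a doorbell). */
    static void process_send_queue(struct wqe *q, unsigned depth,
                                   unsigned *read_ptr, unsigned write_ptr)
    {
        for (;;) {
            unsigned cur = *read_ptr;
            if (!wqe_send(&q[cur]))        /* send data 1, data 2, ... */
                return;                    /* stop on a transmit error */
            *read_ptr = (cur + 1) % depth; /* advance past the processed unit */
            if (cur == write_ptr)          /* last stored unit has been sent */
                return;
        }
    }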

Assume that, in addition to data 1 through data 4, the first compute node needs to send data 5 to the second compute node. Like data 1 through data 4, data 5 also needs to be stored in one WQE. However, WQE 11 through WQE 14 in send queue 1 as shown in FIG. 1 have already been used to store data 1 through data 4, so there is no available WQE left in the send queue. In this case, the first compute node may recycle WQEs or expand the send queue in order to send data 5 to the second compute node.

In some possible implementations, WQEs in the send queue may be recycled. Specifically, after data 1 saved in WQE 11 is sent to the second compute node, the first network device may continue to process WQE 12, WQE 13 and WQE 14 in order. In addition, once data 1 saved in WQE 11 has been sent to the second compute node, the first compute node may clear data 1 stored in WQE 11 and store data 5 into WQE 11. The first network device may then continue to process WQE 11 after processing WQE 14, and in this way send data 5 to the second network device. Furthermore, after data 5 is stored in WQE 11, WQE 11 becomes the last WQE to which data was written. In this case, the unit indicated by the write pointer in the send queue information may be updated to the unit corresponding to WQE 11, i.e., unit 1. The first compute node may directly update the write pointer in the storage space 3, modifying the unit pointed to by the write pointer from unit 4 to unit 1.

In other possible implementations, the first compute node may expand send queue 1, add a new WQE 15, and store data 5 in WQE 15. The first network device may then process WQE 11, WQE 12, WQE 13, WQE 14 and WQE 15 in sequence, thereby sending data 1 through data 5 to the second compute node. As with recycling WQEs, after data 5 is stored into WQE 15, WQE 15 becomes the last WQE to which data was written. In this case, the unit indicated by the write pointer in the send queue information may be updated to the unit corresponding to WQE 15. The first computing node may directly update the write pointer in the storage space 3, modifying the unit pointed to by the write pointer from unit 4 to the unit corresponding to WQE 15.
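
Recycling a unit and directly publishing the new write pointer to storage space 3, as described above, might look like the following sketch; the layout is assumed, and a real implementation would first have to confirm that the network device has finished processing the unit being reused.

    struct send_queue {
        struct { const void *data; unsigned len; } *wqe; /* assumed WQE array */
        unsigned depth;                /* e.g., 4 units for WQE 11..WQE 14 */
        volatile unsigned *write_ptr;  /* write pointer kept in storage space 3 */
    };

    /* Computing node side: store new data into the unit after the current
     * write pointer (wrapping around, e.g., from the unit of WQE 14 back to
     * the unit of WQE 11) and publish the new write pointer directly,
     * without involving the network device. */
    static void post_wqe(struct send_queue *sq, const void *data, unsigned len)
    {
        unsigned next = (*sq->write_ptr + 1) % sq->depth;
        sq->wqe[next].data = data;     /* e.g., data 5 reuses WQE 11 */
        sq->wqe[next].len  = len;
        *sq->write_ptr = next;         /* direct update of storage space 3 */
    }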

Correspondingly, the second computing node may also store data 5 into the second computing node by recycling WQEs or expanding the receive queue.

In some possible implementations, WQEs in receive queue 1 may be recycled. Specifically, after data 1 is saved to the storage location pointed to by WQE 21, the second network device may continue, in order, to save the received data 2 to the storage location pointed to by WQE 22, data 3 to the storage location pointed to by WQE 23, and data 4 to the storage location pointed to by WQE 24. In addition, after data 1 is saved to the storage location pointed to by WQE 21, the second compute node may clear the storage location stored in WQE 21 and store a new storage location in WQE 21, different from the storage locations pointed to by the original WQE 21 and by WQE 22 through WQE 24. After saving data 4 to the storage location pointed to by WQE 24, the second network device may save the newly received data (i.e., data 5) to the storage location pointed to by WQE 21. In this way, the second network device saves data 5 from the first computing node into the second computing node. Furthermore, after the storage location pointed to by WQE 21 is updated, WQE 21 becomes the last WQE to which data was written. In this case, the unit indicated by the write pointer in the receive queue information may be updated to the unit corresponding to WQE 21, i.e., unit 1. The second compute node may directly update the write pointer in the storage space 6, modifying the unit pointed to by the write pointer from unit 4 to unit 1.

In other possible implementations, the second compute node may expand receive queue 1 and add a new WQE 25. WQE 25 points to a storage location in the second compute node's memory different from the storage locations pointed to by WQE 21 through WQE 24. The second network device may then store the received data 1 through data 5 to the storage locations pointed to by WQE 21, WQE 22, WQE 23, WQE 24 and WQE 25 in that order, thereby storing data 1 through data 5 into the second computing node. As with recycling WQEs, after the new WQE 25 is added, WQE 25 becomes the last WQE to which data was written. In this case, the unit indicated by the write pointer in the receive queue information may be updated to the unit corresponding to WQE 25. The second compute node may directly update the write pointer in the storage space 6, modifying the unit pointed to by the write pointer from unit 4 to the unit corresponding to WQE 25.

In summary, assume that in the initial state (i.e., when the queue pair has just been created) the send queue and the receive queue each include N1 WQEs (where N1 is a positive integer greater than or equal to 1). Each time a WQE is used (i.e., data is saved to the WQE, or storage location information is saved to the WQE), the number of available WQEs in the send queue and the receive queue decreases by 1; once every WQE has been used, the number of available WQEs drops to 0. At that point, new WQEs may be added to the send queue and the receive queue by expansion, or the data in already-processed WQEs may be cleared and the storage space corresponding to those WQEs reused to store information, thereby recycling the storage resources.
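
The bookkeeping summarized here amounts to a simple counter over N1 WQEs; a sketch, with all names illustrative:

    /* Available-WQE accounting for a queue created with n1 WQEs. */
    struct wqe_budget { unsigned total, available; };

    static void budget_init(struct wqe_budget *b, unsigned n1)
    {
        b->total = n1;                 /* initial state: all WQEs available */
        b->available = n1;
    }

    /* Using a WQE (saving data or a storage location to it) consumes one. */
    static int budget_use(struct wqe_budget *b)
    {
        if (b->available == 0)
            return -1;                 /* expand the queue or recycle a WQE first */
        b->available--;
        return 0;
    }

    /* Expansion adds a new WQE; recycling frees an already-processed one. */
    static void budget_expand(struct wqe_budget *b)  { b->total++; b->available++; }
    static void budget_recycle(struct wqe_budget *b) { b->available++; }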

Adding a unit or WQE to a queue in the above embodiments may thus be understood either as adding a new unit or WQE to the queue by expansion, or as reusing an existing unit or WQE by recycling storage resources.

The various steps performed by the first computing node in the method of FIG. 2 may be implemented by an application program running above the hardware layer of the first computing node. Specifically, when implementing the steps of the above method, the application program running on the processor of the first computing node can directly access the storage device of the first computing node, bypassing the operating system running above the hardware layer. Correspondingly, the second network device of the second computing node may also directly write the received data to the storage device of the second computing node. In other words, the application software running on the first computing node can transfer data directly from the memory of the first computing node to the memory of the second computing node, without the intervention of either side's operating system during the transfer.

In summary, in the technical solution provided in the embodiments of the present application, the compute node can directly transmit the write pointer updated at the current time to the network device, avoiding the problem in the conventional technology of network device data processing errors caused by inconsistency between the write pointer used by the network device and the write pointer stored by the compute node. Moreover, since the compute node passes the write pointer directly to the network device each time it updates the pointer's position, the network device no longer needs the two read operations required in the conventional technology (the first read obtains the physical location where the compute node stores the write pointer; the second read fetches the write pointer from that location). Reducing the number of read operations reduces the processing load on the system and improves processing efficiency. In addition, the conventional technology needs two storage spaces for one write pointer: one to hold the address of the write pointer and one to hold the write pointer itself. The method provided in the embodiments of the present application can keep the write pointer in a single storage space, thereby saving storage space.

The method for processing data provided by the embodiment of the present application is described in detail above with reference to fig. 1 to 2, and the apparatus and system for processing data provided by the embodiment of the present application are described below with reference to fig. 3 to 6.

Fig. 3 is a block diagram of a network device according to an embodiment of the present application. As shown in fig. 3, the network device 300 includes an acquisition unit 301 and a processing unit 302.

The acquisition unit 301 is configured to read the pointer information of the first queue from the queue information storage space of a computing node according to the identifier of the first queue. The computing node is connected to the network device 300, which is used to enable communication between the computing node and other computing nodes. The queue information storage space is provided in the memory of the computing node.

For example, in some possible implementations, network device 300 may be network device 111 in fig. 1, and the computing node may be computing node 110 in fig. 1. The queue information storage space may be provided in the storage device 112. In this case, the first queue may be a transmit queue in the method of fig. 2. The pointer information of the first queue may be the transmit queue pointer information in the method of fig. 2.

Alternatively, network device 300 may be network device 121 in fig. 1, and the computing node may be computing node 120 in fig. 1. The queue information storage space may be provided in the storage device 122. In this case, the first queue may be the receive queue in the method of fig. 2. The pointer information of the first queue may be the receive queue pointer information in the method of fig. 2.

It will be appreciated that the first queue may be other forms of queues, such as a completion queue, a commit queue, etc., in addition to a receive queue or a transmit queue.

The processing unit 302 is configured to obtain the data to be processed in the first queue according to the read pointer in the pointer information, and to process the data to be processed.

The processing unit 302 is further configured to update the location of the unit indicated by the read pointer according to the first write pointer.

The acquisition unit 301 may be implemented by a receiver, and the processing unit 302 by a processor. For the specific functions and beneficial effects of the acquisition unit 301 and the processing unit 302, reference may be made to the first network device or the second network device in the method shown in fig. 2.

It should be understood that the network device 300 according to the embodiment of the present invention may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the data processing method shown in fig. 2 is implemented by software, the network device 300 and its modules may also be software modules.

The network device 300 according to the embodiment of the present invention may correspond to perform the method described in the embodiment of the present invention, and the above and other operations and/or functions of each unit in the network device 300 are respectively for implementing corresponding flows of each method in fig. 2, and are not described herein again for brevity.

Fig. 4 is a block diagram of a network device according to an embodiment of the present application. The network device 400 shown in fig. 4 includes: a processor 401, a memory 402, and a communication interface 403, the processor 401, the memory 402, and the communication interface 403 communicating via a bus 404.

The processor 401, the memory 402 and the communication interface 403 communicate with each other via internal connection paths to transfer control and/or data signals.

The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 401. The processor 401 may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like; a general purpose processor may be a microprocessor or any conventional processor. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in the memory 402. The memory 402 may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). The processor 401 reads instructions from the memory 402 and completes the above steps in combination with its hardware.

Alternatively, the memory 402 may store instructions for performing the method performed by the first network device in the method shown in fig. 2. The processor 401 may execute the instructions stored in the memory 402, in combination with other hardware (e.g., the communication interface 403), to complete the steps of the first network device in the method shown in fig. 2; for the specific working process and beneficial effects, refer to the description in the embodiment shown in fig. 2.

Alternatively, the memory 402 may store instructions for performing the method performed by the second network device in the method shown in fig. 2. The processor 401 may execute the instructions stored in the memory 402, in combination with other hardware (e.g., the communication interface 403), to complete the steps of the second network device in the method shown in fig. 2; for the specific working process and beneficial effects, refer to the description in the embodiment shown in fig. 2.

The bus 404 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are labeled in the figure as bus 404.

It should be understood that the network device 400 according to the embodiment of the present invention may correspond to the network device 300 in the embodiment of the present invention, and may correspond to the first network device or the second network device executing the method shown in fig. 2. The above and other operations and/or functions of the modules in the network device 400 implement the corresponding flows of the methods in fig. 2, and are not described here again for brevity.

Fig. 5 is a block diagram of a computing node according to an embodiment of the present application. As shown in fig. 5, the computing node 500 includes a processing unit 501, a storage unit 502, and a transmission unit 503.

The processing unit 501 is configured to determine a queue information storage space, and a first storage space within the queue information storage space, according to the number of queues to be processed and the size of the queue information. The computing node 500 is connected to a network device, which is used to enable communication between the computing node 500 and other computing nodes. The queue information storage space is provided in the storage unit 502.

For example, in some possible implementations, the network device may be network device 111 in fig. 1, and computing node 500 may be computing node 110 in fig. 1. The queue information storage space may be provided in the storage device 112. Storage device 112 may be a memory unit 502.

As another example, in another possible implementation, the network device may be network device 121 in fig. 1, and computing node 500 may be computing node 120 in fig. 1. The queue information storage space may be provided in the storage device 122. Storage device 122 may be a memory unit 502.

The processing unit 501 is further configured to store the write pointer of a first queue into the first storage space, where the first queue is any one of the queues, and the write pointer of the first queue is used to indicate the position of the last unit in the first queue in which the computing node is allowed to store data at the current time.

A sending unit 503, configured to send the identifier of the first queue to the network device.

The processing unit 501 may be implemented by a processor, the storage unit 502 by a memory, and the sending unit 503 by a transmitter. For the specific functions and beneficial effects of the processing unit 501, the storage unit 502 and the sending unit 503, reference may be made to the first computing node or the second computing node in the method shown in fig. 2; details are not repeated here.

It should be understood that the compute node 500 of the embodiment of the present invention may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the data processing method shown in fig. 2 is implemented by software, the computing node 500 and its modules may also be software modules.

The computing node 500 according to the embodiment of the present invention may correspond to perform the method described in the embodiment of the present invention, and the above and other operations and/or functions of each unit in the computing node 500 are respectively for implementing corresponding flows of each method in fig. 2, and are not described herein again for brevity.

Fig. 6 is a block diagram of a computing node provided in accordance with an embodiment of the present invention. The computing node 600 shown in fig. 6 includes a processor 601 coupled to one or more data storage devices. The data storage devices may include a storage medium 602 and a memory unit 604. The storage medium 602 may be read-only, such as a read-only memory (ROM), or readable/writable, such as a hard disk or a flash memory. The memory unit 604 may be a random access memory (RAM); it may be physically integrated within the processor 601 or implemented as one or more separate units. The processor 601, the storage medium 602, the communication interface 603 and the memory unit 604 communicate via a bus 605.

The processor 601 is the control center of the computing node 600 and provides sequencing and processing facilities to execute instructions, handle interrupts, provide timing functions, and perform other functions. Optionally, the processor 601 includes one or more central processing units (CPUs), such as CPU 0 and CPU 1 shown in fig. 6. Optionally, the computing node 600 includes a plurality of processors. The processor 601 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. Unless otherwise indicated, a component such as a processor or a memory for performing a task may be implemented as a general purpose component temporarily configured to perform the task at a given time, or as a specific component manufactured to perform the task; the term "processor" as used herein refers to one or more such devices or circuits. The processor 601 may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or any conventional processor.

Program code executed by the CPUs of the processor 601 may be stored in the memory unit 604 or the storage medium 602. Optionally, program code (e.g., a kernel or a program to be debugged) is stored in the storage medium 602 and copied into the memory unit 604 for execution by the processor 601. The processor 601 may execute at least one kernel (e.g., the kernel of an operating system such as LINUX™, UNIX™, WINDOWS™, ANDROID™, or IOS™), and controls the operation of the computing node 600 by controlling the execution of other programs or processes, controlling communication with peripheral devices, and controlling the use of the resources of the data processing device, thereby implementing the operational steps of the method illustrated in fig. 2 described above.

The computing node 600 also includes a communication interface 603 for communicating with other devices or systems, either directly or through an external network. Optionally, the computing node 600 further includes an output device and an input device (not shown in fig. 6). The output device is connected to the processor 601 and can display information in one or more ways; an example is a visual display device such as a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT), or a projector. The input device is also connected to the processor 601 and can receive input from a user of the computing node 600; examples include a mouse, a keyboard, a touch screen device, and a sensing device.

The bus 605 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are labeled in the figure as bus 605.

The computing node 600 may communicate with the network device 400 shown in fig. 4 via the communication interface 603 to implement the operational steps of the method shown in fig. 2 described above. Optionally, the network device 400 shown in fig. 4 may also access the computing node 600 through the bus 605; for example, the network device 400 may be inserted into a PCIe slot (not shown in fig. 6) provided in the computing node 600 and communicate with the elements of the computing node through the bus 605.

It should be understood that the computing node 600 for data processing according to the embodiment of the present invention may correspond to the computing node 500 for data processing in the embodiment of the present invention, and may correspond to the corresponding entity executing the method shown in fig. 2 according to the embodiment of the present invention. The above and other operations and/or functions of the modules in the computing node 600 implement the corresponding flows of the methods in fig. 2, and are not described here again for brevity.

An embodiment of the present application further provides a system, which includes the network device shown in fig. 4 and the computing node shown in fig. 6. The network device and the computing node are configured to implement the method flow shown in fig. 2; details are not described here for brevity.

The embodiment of the application also provides a chip, which includes a transceiver unit and a processing unit. The transceiver unit may be an input/output circuit or a communication interface; the processing unit is a processor, a microprocessor, or an integrated circuit integrated on the chip. The chip may perform the method of the first network device in the above method embodiments.

The embodiment of the application also provides a chip, which includes a transceiver unit and a processing unit. The transceiver unit may be an input/output circuit or a communication interface; the processing unit is a processor, a microprocessor, or an integrated circuit integrated on the chip. The chip may perform the method of the second network device in the above method embodiments.

The embodiment of the application also provides a chip, which includes a transceiver unit and a processing unit. The transceiver unit may be an input/output circuit or a communication interface; the processing unit is a processor, a microprocessor, or an integrated circuit integrated on the chip. The chip may perform the method performed by the first computing node in the above embodiments.

The embodiment of the application also provides a chip, which comprises a transceiver unit and a processing unit. The transceiver unit can be an input/output circuit and a communication interface; the processing unit is a processor or a microprocessor or an integrated circuit integrated on the chip. The chip may perform the method performed by the second computing node in the above embodiments.
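The four chip embodiments above share one structure: a transceiver unit facing the outside and a processing unit that executes the method. A minimal C sketch of that division might look as follows; every name in it is hypothetical.

```c
#include <stddef.h>

/* Illustrative transceiver unit: the input/output circuit or
 * communication interface of the chip. */
typedef struct {
    int (*recv)(void *buf, size_t len);        /* input side */
    int (*send)(const void *buf, size_t len);  /* output side */
} transceiver_unit_t;

/* Illustrative chip: a transceiver unit plus a processing unit (a
 * processor, a microprocessor, or an integrated circuit on the chip). */
typedef struct {
    transceiver_unit_t xcvr;
    void (*process)(void *data, size_t len);
} chip_t;
```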

The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium. The semiconductor medium may be a Solid State Drive (SSD).

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The above description is only of specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement that can be readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
