Data transmission method, device and computer storage medium

Document No.: 1314622  Publication date: 2020-07-10

Reading note: this technology, Data transmission method, device and computer storage medium, was designed and created by 唐盛武, 黄健, and 杨裕焱 on 2019-12-11. Its main content is as follows: The application discloses a data transmission method, a data transmission device, and a computer storage medium, belonging to the technical field of communications. The method comprises the following: a receive queue is configured in the receiving end; the receive queue includes N memory blocks, one memory block is used for caching one data unit, and N is a positive integer greater than 1. That is, in the embodiments of the present application, a receive queue with a fixed capacity is configured at the receiving end. When the receiving end receives one or more data units sent by the sending end, each data unit is cached in one memory block of the receive queue, and the receiving end promptly sends a notification message to the sending end, so that the sending end can send data units according to the number of idle memory blocks. The sending end does not need to send a memory application request to the receiving end before each data transmission, which avoids frequent interaction between the receiving end and the sending end during data transmission.

1. A data transmission method, characterized in that the method is applied to a receiving end, wherein the receiving end is configured with a receive queue, the receive queue comprises N memory blocks, one memory block is used for caching one data unit, and N is a positive integer greater than 1; the method comprises the following steps:

receiving one or more data units sent by a sending end, wherein the number of the one or more data units is less than N;

caching the one or more data units in a memory block included in the receiving queue;

and sending, to the sending end, a notification message indicating the number of idle memory blocks in the receive queue, so that the sending end continues to send data units according to the number of idle memory blocks, wherein an idle memory block is a memory block that does not cache a data unit.

2. The method according to claim 1, wherein the sending, to the sending end, of the notification message indicating the number of idle memory blocks in the receive queue comprises:

if it is detected that memory blocks in the receive queue have been moved out of the receive queue, adding idle memory blocks to the receive queue, wherein the number of added idle memory blocks is the same as the number of moved-out memory blocks;

and sending the notification message to the sending end.

3. The method according to claim 2, wherein a counter is deployed on the receiving end, and the counter is used for recording the total number of memory blocks removed from the receive queue between the creation of the receive queue and the current time;

the detecting that memory blocks in the receive queue have been moved out of the receive queue comprises:

if the number recorded by the counter changes, determining that memory blocks in the receive queue have been removed from the receive queue;

the sending the notification message to the sending end comprises:

and sending, to the sending end, a notification message carrying the number recorded by the counter, so that the sending end determines the number of idle memory blocks in the receive queue according to the number recorded by the counter.

4. The method of claim 1, wherein the method further comprises:

receiving a negotiation message sent by the sending end, where the negotiation message carries the size of the memory blocks to be created and the number of the memory blocks to be created;

and creating the receive queue according to the negotiation message, where the number of the memory blocks included in the created receive queue is equal to the number of the memory blocks to be created in the negotiation message, and the size of the memory blocks included in the created receive queue is equal to the size of the memory blocks to be created in the negotiation message.

5. The method according to any one of claims 1 to 4, wherein the sending end transmits data to the receiving end in a remote direct memory access (RDMA) mode;

the receiving one or more data units sent by the sending end includes:

receiving the one or more data units over a data link between the receiving end and the sending end;

the sending, to the sending end, a notification message for indicating the number of the free memory blocks in the receive queue includes:

and sending the notification message through a control link between the receiving end and the sending end.

6. A data transmission method, applied to a sending end, the method comprising:

acquiring a notification message sent by a receiving end, where the notification message is used to indicate the number of idle memory blocks in a receiving queue configured in the receiving end, the receiving queue includes N memory blocks, one memory block is used to cache one data unit sent by the sending end, N is a positive integer greater than 1, and the idle memory block is a memory block that does not cache the data unit;

determining the number of the idle memory blocks according to the notification message;

determining a target number according to the number of the idle memory blocks and the number of data units to be sent;

and sending the target number of data units to the receiving end.

7. The method according to claim 6, wherein the determining a target number according to the number of free memory blocks and the number of data units to be sent comprises:

and if the number of the idle memory blocks is greater than or equal to the number of the data units to be sent, determining the number of the data units to be sent as the target number.

8. The method according to claim 6, wherein the determining a target number according to the number of free memory blocks and the number of data units to be sent comprises:

and if the number of the idle memory blocks is smaller than the number of the data units to be sent, determining the number of the idle memory blocks as the target number.

9. The method according to claim 6, wherein a counter is deployed on the receiving end, the counter is used to record the total number of memory blocks removed from the receive queue between the time when the receive queue is created and the current time, and the notification message carries the number recorded by the counter;

the determining, according to the notification message, the number of the idle memory blocks includes:

determining the total number of data units sent by the sending end between the creation of the receive queue and the current time;

and determining the number of idle memory blocks according to the total number of data units sent by the sending end, N, and the number recorded by the counter.

10. The method of any of claims 6 to 9, further comprising:

and sending a negotiation message to the receiving end, wherein the negotiation message carries the size and the total number of the data units used by the sending end when transmitting data, so that the receiving end creates the receive queue according to the negotiation message.

11. A data transmission device, characterized in that the device is applied to a receiving end, wherein a receive queue is configured in the receiving end, the receive queue comprises N memory blocks, one memory block is used for caching one data unit, and N is a positive integer greater than 1; the device comprises:

a first receiving module, configured to receive one or more data units sent by a sending end, where the number of the one or more data units is less than N;

a cache module, configured to cache the one or more data units in a memory block included in the receive queue;

a sending module, configured to send, to a sending end, a notification message used to indicate the number of idle memory blocks in the receive queue, so that the sending end continues to send data units according to the number of idle memory blocks, where an idle memory block refers to a memory block that does not cache data units.

12. A data transmission apparatus, applied to a sending end, the apparatus comprising:

an obtaining module, configured to obtain a notification message sent by a receiving end, where the notification message is used to indicate the number of idle memory blocks in a receiving queue configured in the receiving end, the receiving queue includes N memory blocks, one memory block is used to cache one data unit sent by the sending end, where N is a positive integer greater than 1, and the idle memory block is a memory block that does not cache a data unit;

a first determining module, configured to determine, according to the notification message, the number of the idle memory blocks;

a second determining module, configured to determine a target number according to the number of the idle memory blocks and the number of the data units to be sent;

a first sending module, configured to send the target number of data units to the receiving end.

13. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 10.

Technical Field

The present application relates to the field of communications technologies, and in particular, to a data transmission method, an apparatus, and a computer storage medium.

Background

With the development of communication technology, remote direct memory access (RDMA) can be used for data transmission between different computers. The technology transfers data directly from the memory of one computer to the memory of another without involving the central processing unit (CPU) of either computer, thereby effectively reducing delay in the data transmission process.

Disclosure of Invention

The embodiment of the application provides a data transmission method, which can avoid frequent message interaction between a sending end and a receiving end in the prior art. The technical scheme is as follows:

in a first aspect, a data transmission method is provided, where a receiving end is configured with a receiving queue, where the receiving queue includes N memory blocks, and one memory block is used to cache one data unit, where N is a positive integer greater than 1; the method comprises the following steps:

receiving one or more data units sent by a sending end, wherein the number of the one or more data units is less than N;

caching the one or more data units in a memory block included in the receiving queue;

and sending, to the sending end, a notification message indicating the number of idle memory blocks in the receive queue, so that the sending end continues to send data units according to the number of idle memory blocks, where an idle memory block is a memory block that does not cache a data unit.

Optionally, the sending, to the sending end, of the notification message indicating the number of idle memory blocks in the receive queue includes:

if it is detected that memory blocks in the receive queue have been moved out of the receive queue, adding idle memory blocks to the receive queue, where the number of added idle memory blocks is the same as the number of moved-out memory blocks;

and sending the notification message to the sending end.

Optionally, a counter is deployed on the receiving end, and the counter is configured to record the total number of memory blocks removed from the receive queue between the creation of the receive queue and the current time;

the detecting that there is a memory block in the receive queue that is moved out of the receive queue includes:

if the number recorded by the counter changes, determining that memory blocks in the receive queue have been removed from the receive queue;

the sending the notification message to the sending end includes:

and sending, to the sending end, a notification message carrying the number recorded by the counter, so that the sending end determines the number of idle memory blocks in the receive queue according to the number recorded by the counter.
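The replenish-and-notify behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and method names (`ReceiveQueue`, `cache`, `consume`, `poll_and_notify`) are hypothetical, and in a real RDMA system the blocks would be registered memory regions rather than plain bytearrays.

```python
from collections import deque

class ReceiveQueue:
    """Fixed-capacity receive queue with a cumulative removal counter."""

    def __init__(self, n_blocks):
        self.n = n_blocks
        self.free = deque(bytearray() for _ in range(n_blocks))  # idle blocks
        self.used = deque()        # blocks currently caching data units
        self.removed_total = 0     # counter: total blocks moved out since creation
        self.last_notified = 0     # counter value carried by the last notification

    def cache(self, data_unit):
        """Cache one received data unit in one idle memory block."""
        block = self.free.popleft()
        block[:] = data_unit
        self.used.append(block)

    def consume(self):
        """Upper layer moves one cached unit out of the queue; one fresh
        idle block is added per removed block, and the counter advances."""
        block = self.used.popleft()
        self.removed_total += 1
        self.free.append(bytearray())
        return block

    def poll_and_notify(self, send_notification):
        """If the counter changed, send a notification carrying its value."""
        if self.removed_total != self.last_notified:
            send_notification(self.removed_total)
            self.last_notified = self.removed_total
```

One advantage of carrying the cumulative total rather than a delta is that a delayed notification on the control link does no lasting harm: the next notification carries the up-to-date total.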

Optionally, the method further includes:

receiving a negotiation message sent by the sending end, where the negotiation message carries the size of the memory blocks to be created and the number of the memory blocks to be created;

and creating the receive queue according to the negotiation message, where the number of the memory blocks included in the created receive queue is equal to the number of the memory blocks to be created in the negotiation message, and the size of the memory blocks included in the created receive queue is equal to the size of the memory blocks to be created in the negotiation message.

Optionally, the sending end transmits data to the receiving end in an RDMA mode;

the receiving one or more data units sent by the sending end includes:

receiving the one or more data units over a data link between the receiving end and the sending end;

the sending, to the sending end, a notification message for indicating the number of the free memory blocks in the receive queue includes:

and sending the notification message through a control link between the receiving end and the sending end.

In a second aspect, a data transmission method is provided, which is applied to a sending end, and the method includes:

acquiring a notification message sent by a receiving end, where the notification message is used to indicate the number of idle memory blocks in a receiving queue configured in the receiving end, the receiving queue includes N memory blocks, one memory block is used to cache one data unit sent by the sending end, N is a positive integer greater than 1, and the idle memory block is a memory block that does not cache the data unit;

determining the number of the idle memory blocks according to the notification message;

determining a target number according to the number of the idle memory blocks and the number of data units to be sent;

and sending the target number of data units to the receiving end.

Optionally, the determining the target number according to the number of the idle memory blocks and the number of the data units to be sent includes:

and if the number of the idle memory blocks is greater than or equal to the number of the data units to be sent, determining the number of the data units to be sent as the target number.

Optionally, the determining the target number according to the number of the idle memory blocks and the number of the data units to be sent includes:

and if the number of the idle memory blocks is smaller than the number of the data units to be sent, determining the number of the idle memory blocks as the target number.
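Taken together, the two optional branches above simply cap the batch at the number of idle blocks. A one-function sketch with hypothetical names:

```python
def target_number(idle_blocks, units_to_send):
    """Target number of data units to send in one batch.

    If every pending unit fits in an idle block, send them all;
    otherwise send only as many units as there are idle blocks.
    """
    if idle_blocks >= units_to_send:
        return units_to_send
    return idle_blocks
```

This is equivalent to `min(idle_blocks, units_to_send)`; the two branches mirror the two optional cases stated above.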

Optionally, a counter is disposed on the receiving end, where the counter is used to record the total number of memory blocks removed from the receiving queue between creation of the receiving queue and current time, and the notification message carries the number recorded by the counter;

the determining, according to the notification message, the number of the idle memory blocks includes:

determining the total number of data units sent by the sending end between the creation of the receive queue and the current time;

and determining the number of idle memory blocks according to the total number of data units sent by the sending end, N, and the number recorded by the counter.
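The computation implied above can be written out concretely. Every sent unit occupies one memory block until the receiving end moves it out (which increments the counter), so the occupied-block count is the difference between the two cumulative totals. A sketch under that reading, with hypothetical names:

```python
def idle_block_count(n, units_sent_total, counter_value):
    """Idle blocks at the receiver, as computed by the sending end.

    n: total blocks in the receive queue (N).
    units_sent_total: units this sender has sent since queue creation.
    counter_value: blocks removed from the queue, per the notification.
    """
    occupied = units_sent_total - counter_value  # units still held in the queue
    return n - occupied
```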

Optionally, the method further includes:

and sending a negotiation message to the receiving end, where the negotiation message carries the size and the total number of the data units used by the sending end when transmitting data, so that the receiving end creates the receive queue according to the negotiation message.

In a third aspect, a data transmission apparatus is provided, where a receiving queue is configured in a receiving end, where the receiving queue includes N memory blocks, and one memory block is used to cache one data unit, where N is a positive integer greater than 1; the device comprises:

a first receiving module, configured to receive one or more data units sent by a sending end, where the number of the one or more data units is less than N;

a cache module, configured to cache the one or more data units in a memory block included in the receive queue;

a sending module, configured to send, to a sending end, a notification message used to indicate the number of idle memory blocks in the receive queue, so that the sending end continues to send data units according to the number of idle memory blocks, where an idle memory block refers to a memory block that does not cache data units.

Optionally, the sending module includes:

a creating submodule, configured to add idle memory blocks to the receive queue if it is detected that memory blocks in the receive queue have been moved out of the receive queue, where the number of added idle memory blocks is the same as the number of removed memory blocks;

and the first sending submodule is used for sending the notification message to the sending end.

Optionally, a counter is deployed on the receiving end, and the counter is configured to record the total number of memory blocks removed from the receive queue between the creation of the receive queue and the current time;

the creating sub-module is further configured to:

if the number recorded by the counter changes, determining that the memory block in the receiving queue is removed from the receiving queue;

the first sending submodule is further configured to:

and sending a notification message carrying the number of the counter records to the sending end, so that the sending end determines the number of the idle memory blocks in the receiving queue according to the number of the counter records.

Optionally, the apparatus further comprises:

a second receiving module, configured to receive a negotiation message sent by the sending end, where the negotiation message carries the size of the memory block to be created and the number of the memory blocks to be created;

a creating module, configured to create the receive queue according to the negotiation message, where a number of the memory blocks included in the created receive queue is equal to a number of the memory blocks to be created in the negotiation message, and a size of the memory blocks included in the created receive queue is equal to a size of the memory blocks to be created in the negotiation message.

Optionally, the sending end transmits data to the receiving end in an RDMA mode;

the first receiving module includes:

a receiving submodule, configured to receive the one or more data units through a data link between the receiving end and the sending end;

a second sending submodule, configured to send, to the sending end, the notification message indicating the number of idle memory blocks in the receive queue, where the second sending submodule includes:

a third sending submodule, configured to send the notification message through a control link between the receiving end and the sending end.

In a fourth aspect, a data transmission apparatus is provided, which is applied to a sending end, and the apparatus includes:

an obtaining module, configured to obtain a notification message sent by a receiving end, where the notification message is used to indicate the number of idle memory blocks in a receiving queue configured in the receiving end, the receiving queue includes N memory blocks, one memory block is used to cache one data unit sent by the sending end, where N is a positive integer greater than 1, and the idle memory block is a memory block that does not cache a data unit;

a first determining module, configured to determine, according to the notification message, the number of the idle memory blocks;

a second determining module, configured to determine a target number according to the number of the idle memory blocks and the number of the data units to be sent;

and the first sending module is used for sending the target number of data units to the receiving end.

Optionally, the second determining module is configured to:

and if the number of the idle memory blocks is greater than or equal to the number of the data units to be sent, determining the number of the data units to be sent as the target number.

Optionally, the second determining module is further configured to:

and if the number of the idle memory blocks is smaller than the number of the data units to be sent, determining the number of the idle memory blocks as the target number.

Optionally, a counter is disposed on the receiving end, where the counter is used to record the total number of memory blocks removed from the receiving queue between creation of the receiving queue and current time, and the notification message carries the number recorded by the counter;

the first determining module includes:

a first determining submodule, configured to determine the total number of data units sent by the sending end between the creation of the receive queue and the current time;

and a second determining submodule, configured to determine the number of idle memory blocks according to the total number of data units sent by the sending end, N, and the number recorded by the counter.

Optionally, the apparatus further comprises:

and a second sending module, configured to send a negotiation message to the receiving end, where the negotiation message carries the size and the total number of the data units used by the sending end when transmitting data, so that the receiving end creates the receive queue according to the negotiation message.

In a fifth aspect, a data transmission apparatus is provided, the data transmission apparatus comprising a processor, a communication interface, a memory, and a communication bus;

the processor, the communication interface, and the memory communicate with one another through the communication bus;

the memory is used for storing computer programs;

the processor is used for executing the program stored in the memory so as to implement the data transmission method provided above.

In a sixth aspect, a computer-readable storage medium is provided, in which a computer program is stored; when executed by a processor, the computer program implements the steps of the data transmission method provided above.

The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects:

In the embodiments of the present application, a receiving end is configured with a receive queue, where the receive queue includes N memory blocks, one memory block is used to cache one data unit, and N is a positive integer greater than 1. That is, a receive queue with a fixed capacity is configured at the receiving end. When the receiving end receives one or more data units sent by a sending end, each of the data units is cached in one memory block of the receive queue, and the receiving end promptly sends a notification message to the sending end, so that the sending end can send data units according to the number of idle memory blocks. The sending end therefore does not need to send a memory application request to the receiving end before each transmission, which avoids frequent interaction between the receiving end and the sending end during data transmission.

Drawings

To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.

Fig. 1 is a structural diagram of a data transmission system according to an embodiment of the present application.

Fig. 2 is a flowchart of a data transmission method according to an embodiment of the present application.

Fig. 3 is a flowchart of another data transmission method according to an embodiment of the present application.

Fig. 4 is a flowchart of a data transmission method according to an embodiment of the present application.

Fig. 5 is a schematic diagram of a receive queue according to an embodiment of the present application.

Fig. 6 is a schematic structural diagram of a data transmission device according to an embodiment of the present application.

Fig. 7 is a schematic structural diagram of a data transmission device according to an embodiment of the present application.

Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.

Detailed Description

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

Before explaining the embodiments of the present application in detail, a system architecture related to the embodiments of the present application will be described.

Fig. 1 is a diagram of a data transmission system architecture according to an embodiment of the present application. As shown in Fig. 1, the data transmission system 100 includes one or more clients 101 and one or more servers 102.

Any client 101 of the one or more clients 101 may be connected, in a wireless or wired manner, to any server 102 of the one or more servers 102 for communication.

Data transmission can be performed between the client 101 and the server 102. In the embodiments of the present application, data is transmitted between the client 101 and the server 102 in an RDMA manner. As shown in Fig. 1, a network card (for example, a SmartNIC) is deployed in each of the server 102 and the client 101. When data needs to be transmitted between them, for example when the client 101 needs to send data to the server 102, the network card on the client 101 can send data in the client's memory directly to the network card on the server 102, and the network card on the server 102 stores the received data in the server's memory. The processors on the client 101 and the server 102 are not involved, so excessive computing resources are not consumed, and the performance of data transmission between the client 101 and the server 102 is improved.

The data transmission method provided by the embodiment of the application is applied to a scenario where data transmission is performed between the client 101 and the server 102 in an RDMA manner.

The client 101 and the server 102 in Fig. 1 may each run on a different host. The host in the embodiments of the present application may be a computer, a server, or another device. In addition, Fig. 1 illustrates one client and one server as an example, which does not limit the embodiments of the present application.

In addition, the network card in Fig. 1 is typically an InfiniBand network card, or an Ethernet card supporting the RoCE (RDMA over Converged Ethernet) protocol or iWARP (Internet Wide Area RDMA Protocol). These network cards are given only as examples and do not limit the types of network cards in the embodiments of the present application.

It should be noted that, in the embodiments of the present application, if the data transmission is unidirectional, the client 101 is the sending end and the server 102 is the receiving end. In the case of bidirectional data transmission, the client 101 and the server 102 each act as both a sending end and a receiving end. That is, the data transmission method provided in the following embodiments may be applied to a scenario in which the client 101 sends data to the server 102, a scenario in which the server 102 sends data to the client 101, or a scenario in which the client 101 and the server 102 transmit data to each other bidirectionally. The specific manner in which either party transmits data to the other can be found in the following embodiments and is not described in detail here.

The data transmission method provided in the embodiments of the present application is explained in detail below.

Fig. 2 is a flowchart of a data transmission method provided in an embodiment of the present application, where the method is applied to a receiving end. Referring to fig. 2, the method includes the following steps:

step 201: and the receiving end receives one or more data units sent by the sending end, and the number of the one or more data units is less than N.

Wherein, the receiving end is configured with a receive queue, the receive queue includes N memory blocks, one memory block is used for caching one data unit, and N is a positive integer greater than 1.

Step 202: the receiving end buffers one or more data units in a memory block included in the receiving queue.

Step 203: the receiving end sends a notification message for indicating the number of the idle memory blocks in the receiving queue to the sending end, so that the sending end continues to send the data units according to the number of the idle memory blocks, wherein the idle memory blocks refer to memory blocks without cache data units.

In the embodiments of the present application, a receiving end is configured with a receive queue, where the receive queue includes N memory blocks, one memory block is used to cache one data unit, and N is a positive integer greater than 1. That is, a receive queue with a fixed capacity is configured at the receiving end. When the receiving end receives one or more data units sent by a sending end, each of the data units is cached in one memory block of the receive queue, and the receiving end promptly sends a notification message to the sending end, so that the sending end can send data units according to the number of idle memory blocks. The sending end therefore does not need to send a memory application request to the receiving end before each transmission, which avoids frequent interaction between the receiving end and the sending end during data transmission.
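Steps 201 to 203 above can be sketched procedurally as follows. The container and parameter names are hypothetical, and plain lists stand in for the N registered memory blocks of a real RDMA receive queue:

```python
def receiver_step(free_blocks, used_blocks, incoming_units, notify):
    """One round of the receiving-end flow (steps 201-203)."""
    # Step 201: receive the data units (their count is less than N,
    # so an idle block is available for each of them).
    for unit in incoming_units:
        # Step 202: cache each data unit in one idle memory block.
        block = free_blocks.pop()
        used_blocks.append((block, unit))
    # Step 203: tell the sending end how many idle blocks remain,
    # so it can keep sending without a per-transfer memory request.
    notify(len(free_blocks))
```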

Fig. 3 is a flowchart of a data transmission method provided in an embodiment of the present application, where the method is applied to a sending end. Referring to fig. 3, the method includes the following steps:

step 301: the method comprises the steps that a sending end obtains a notification message sent by a receiving end, the notification message is used for indicating the number of idle memory blocks in a receiving queue configured in the receiving end, the receiving queue comprises N memory blocks, one memory block is used for caching one data unit sent by the sending end, N is a positive integer larger than 1, and the idle memory block refers to a memory block without a cached data unit.

Step 302: the sending end determines the number of idle memory blocks according to the notification message

Step 303: and the sending end determines the target quantity according to the quantity of the idle memory blocks and the quantity of the data units to be sent.

Step 304: the sending end sends the target number of data units to the receiving end.

In this embodiment of the present application, when a sending end sends one or more data units according to a notification message sent by a receiving end, the notification message is used to indicate the number of free memory blocks in a receive queue configured in the receiving end. The sending end sends a target number of data units to the receiving end according to the number of the idle memory blocks and the number of the data units to be sent, and a memory application request does not need to be sent to the receiving end before the sending end transmits data each time, so that frequent interaction between the receiving end and the sending end in a data transmission process is avoided.

Fig. 4 is a flowchart of a data transmission method provided in an embodiment of the present application, where the method is applied to a data transmission system. Referring to fig. 4, the method includes the following steps:

step 401: the method comprises the steps that a sending end sends a negotiation message to a receiving end, and the receiving end receives the negotiation message sent by the sending end, wherein the negotiation message carries the size of a memory block to be created and the number of the memory blocks to be created.

In this embodiment of the present application, in order to avoid that the sending end needs to send a memory application request to the receiving end before sending data to the receiving end each time, the sending end may be configured to transmit data in units of data units, and a receiving queue is configured in the receiving end in advance, where the receiving queue is used to buffer the data units sent by the sending end to the receiving end, so that a subsequent receiving end notifies the sending end of the condition of the data units stored in the receiving queue, so that the sending end controls the number of the sent data units according to the condition of the data units stored in the receiving queue, that is, controls the sent traffic. Therefore, before transmitting data to the receiving end, the transmitting end needs to negotiate with the receiving end through steps 401 to 402 to configure the receiving queue at the receiving end.

In a possible implementation manner, referring to fig. 1, a first network card is configured on the sending end, a network card driving interface is disposed on the first network card, and a parameter setting module is built in the network card driving interface. And a second network card is configured on the receiving end, and a network card driving interface is also deployed on the second network card. In this case, the implementation process of step 401 may be: the network card driving interface on the first network card acquires the number of the memory blocks to be created and the size of the memory blocks to be created through a built-in parameter setting module, then generates a negotiation message according to the two parameters, and sends the negotiation message to the network card driving interface on the second network card, so that the sending end sends the negotiation message to the receiving end.

The network card driving interfaces of the first network card and the second network card may be marked as rdma_connect, the parameter setting module may be marked as conn_param, and the number of the memory blocks to be created and the size of the memory blocks to be created may be carried as private_data.
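The two negotiation parameters can be carried as an opaque private_data payload. A minimal Python sketch of one possible encoding is given below; the fixed two-integer layout is an assumption for illustration, since the text does not specify a wire format:

```python
import struct

def build_negotiation_message(block_size: int, block_count: int) -> bytes:
    # Pack the size of each memory block to be created and the number of
    # memory blocks to be created into a private_data payload.
    # Hypothetical layout: two unsigned 32-bit big-endian integers.
    return struct.pack("!II", block_size, block_count)

def parse_negotiation_message(private_data: bytes) -> tuple:
    # The receiving end recovers both parameters to create its receive queue.
    block_size, block_count = struct.unpack("!II", private_data)
    return block_size, block_count
```

A real implementation would hand this payload to the connection-setup call of the RDMA stack rather than send it as a standalone packet.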

Step 402: the receiving end creates a receiving queue according to the negotiation message, the number of the memory blocks included in the created receiving queue is equal to the number of the memory blocks to be created in the negotiation message, and the size of the memory blocks included in the created receiving queue is equal to the size of the memory blocks to be created in the negotiation message.

In this embodiment of the present application, when a sending end transmits data by using a data unit as a unit, in order to facilitate a receiving end to obtain a storage condition of the data unit in a receiving queue in time, a receiving queue configured in the receiving end is a receiving queue including a plurality of memory blocks, where each memory block is used to store one data unit, so that a subsequent receiving end can quickly determine the storage condition of the data unit in the receiving queue according to the number of idle memory blocks in the receiving queue. Therefore, when receiving the negotiation message sent by the sending end, the receiving end can create a receiving queue according to the negotiation message.

In a possible implementation manner, the receiving end may create the receive queue strictly according to the negotiation message sent by the sending end, that is, the number of the memory blocks in the created receive queue is equal to the number of the memory blocks to be created carried in the negotiation message, and the size of each memory block in the created receive queue is equal to the size of the memory block to be created carried in the negotiation message.

In another possible implementation manner, the receiving end may also create a receive queue larger than what the negotiation message requests, that is, the size of each memory block is larger than the size of the memory block to be created carried in the negotiation message, and/or the number of the memory blocks in the receive queue is larger than the number of the memory blocks to be created carried in the negotiation message.

For example, if the size of the data unit used by the sending end to transmit data, carried in the negotiation message, is B, the size of the memory block determined by the receiving end may be B, or may be a value larger than B. Likewise, if the number of the memory blocks to be created carried in the negotiation message is P, the number of the memory blocks in the created receive queue may be P, or may be a value larger than P.
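Both creation variants described above can be sketched as follows; the class and function names are illustrative, not part of the embodiment:

```python
class ReceiveQueue:
    """Fixed-capacity receive queue: N memory blocks, each caching one data unit."""

    def __init__(self, block_size: int, block_count: int):
        self.block_size = block_size
        # Each block is modeled as a dict; 'data' is None while the block is idle.
        self.blocks = [{"data": None} for _ in range(block_count)]

    def free_count(self) -> int:
        # An idle memory block is one that caches no data unit.
        return sum(1 for b in self.blocks if b["data"] is None)

def create_receive_queue(neg_block_size, neg_block_count,
                         actual_block_size=None, actual_block_count=None):
    # The receiving end may create the queue strictly per the negotiation
    # message, or with a larger block size / block count (both variants
    # described in the text); the defaults follow the message exactly.
    size = actual_block_size if actual_block_size is not None else neg_block_size
    count = actual_block_count if actual_block_count is not None else neg_block_count
    assert size >= neg_block_size and count >= neg_block_count
    return ReceiveQueue(size, count)
```

When the created queue is larger than requested, the acknowledgment message replied in the following step would carry the actual values, as the text notes.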

In addition, after the receiving end successfully creates the receiving queue, the receiving end needs to reply an acknowledgement message to the sending end, so that the sending end can determine that the receiving end side has successfully created the receiving queue, and thus data transmission is started.

It should be noted that, if the number of the memory blocks in the created receive queue is greater than the number of the memory blocks to be created, which is carried in the negotiation message, at this time, the acknowledgment message also needs to carry the number of the memory blocks in the created receive queue, so that the sending end can know the basic situation of the created receive queue. Optionally, if the size of the memory block in the created receive queue is larger than the size of the memory block to be created included in the negotiation message, the acknowledgment message may also carry the size of the memory block in the created receive queue.

After configuring the receive queue at the receive end through the above steps 401 to 402, the transmit end may transmit data to the receive end through the following step 403. That is, in this embodiment of the present application, before a sending end sends data to a receiving end, the receiving end is configured with a receiving queue, where the receiving queue includes N memory blocks, one memory block is used to cache one data unit, and N is a positive integer greater than 1.

In addition, it should be noted that the N memory blocks may also be referred to as a memory pool; that is, in the embodiment of the present application, a memory pool with a certain capacity is created through steps 401 to 402.

Step 403: the sending end sends one or more data units to the receiving end, the receiving end receives the one or more data units sent by the sending end, and the number of the one or more data units is smaller than N.

Based on step 401, the sending end is configured to transmit data in units of data units, and therefore, in a possible implementation manner, when the sending end needs to send data to the receiving end, the sending end may block the data to be sent to obtain one or more data units, and then send the one or more data units to the receiving end.
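The blocking step above amounts to splitting the outgoing byte stream into fixed-size data units, each of which will occupy one memory block at the receiving end. A one-line sketch (function name is illustrative):

```python
def block_data(data: bytes, unit_size: int) -> list:
    # Split the data to be sent into fixed-size data units; the last unit
    # may be shorter. Each unit is cached in one memory block at the receiver.
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]
```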

Step 404: the receiving end buffers each of one or more data units in a memory block in the receiving queue.

Since the receiving end receives the one or more data units sequentially, in step 404, each time the receiving end receives one data unit, the receiving end may cache the data unit in a memory block in the receive queue, so that each data unit in the one or more data units is cached in one of the memory blocks in sequence.

In addition, in order to facilitate management of the memory blocks in the receive queue, the memory blocks in the receive queue may be arranged according to the sequence of the stored data units, and the sorted receive queue is marked as RQ. That is, the memory block with the earlier buffered data unit is arranged in the receiving queue near the head of the queue. Thus, the free memory block is arranged in the receiving queue near the tail of the queue. Subsequently, when the processor at the receiving end needs to process data, a memory block may be sequentially shifted out from the head of the receive queue to process the data units cached in the memory block, that is, the memory block is consumed from the receive queue.

Thus, in one possible implementation manner, a possible implementation of step 404 is: sequentially caching each data unit in the one or more data units in a free memory block in the receive queue according to the sequence of the receiving times of the data units from early to late.

As shown in fig. 5, memory block 0 at the head of the receive queue created by the receiving end already stores a data unit, which indicates that the memory block is already occupied. When the sending end sends a data unit 1, a data unit 2, and a data unit 3 to the receiving end in sequence, the receiving end places the data unit 1 in the memory block 1, the data unit 2 in the memory block 2, and the data unit 3 in the memory block 3; the remaining idle memory block is arranged at the tail of the receive queue and is used for storing data units subsequently received by the receiving end.
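The ordering convention of fig. 5 — occupied blocks near the head in arrival order, idle blocks near the tail — can be modeled with a pair of queues. This is an illustrative sketch of the bookkeeping, not the embodiment's memory management:

```python
from collections import deque

# Occupied blocks sit near the head of the receive queue in arrival order;
# idle blocks wait at the tail. Block contents below mirror the fig. 5 example.
receive_queue = deque()
receive_queue.append("data unit 0")            # memory block 0, already occupied
free_blocks = deque([None, None, None, None])  # idle blocks at the tail

def cache_data_unit(unit):
    # Consume one idle block and append the unit behind the previously
    # cached units, preserving receive order.
    free_blocks.popleft()
    receive_queue.append(unit)

for unit in ["data unit 1", "data unit 2", "data unit 3"]:
    cache_data_unit(unit)
```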

Step 405: the receiving end sends a notification message used for indicating the number of the idle memory blocks in the receiving queue to the sending end, and the sending end acquires the notification message sent by the receiving end, wherein the idle memory blocks refer to memory blocks without cache data units.

In one possible implementation manner, a possible implementation of step 405 is: when the receiving end caches a data unit in a memory block of the receive queue, the receiving end may directly determine the number of the idle memory blocks in the current receive queue, carry the number of the idle memory blocks in the notification message, and send the notification message to the sending end.

Optionally, in the process of processing the memory blocks of the receive queue, in order to ensure that the capacity of the receive queue is fixed, each time the processor removes one memory block from the receive queue, one free memory block is added to the receive queue, so that the number of the memory blocks included in the receive queue is always kept at N. Because the sending end knows in advance the total number N of the memory blocks in the receive queue configured by the receiving end, the sending end can determine the number of the idle memory blocks in the receive queue as long as it knows the number of the memory blocks that have been removed from the receive queue. Therefore, in another possible implementation manner, a possible implementation of step 405 is: the receiving end monitors the receive queue in real time; if it is detected that memory blocks have been removed from the receive queue, idle memory blocks are added to the receive queue, the number of the added idle memory blocks being the same as the number of the removed memory blocks; and a notification message is sent to the sending end. In this case, the notification message carries the number of the removed memory blocks.

In addition, the receiving end may also create a completion queue, which may be marked as CQ. The completion queue is used to store the memory blocks removed from the receive queue; that is, when the processor of the receiving end finishes processing one data unit in the receive queue, the processor removes the memory block occupied by the processed data unit from the receive queue and places it in the completion queue. The processor of the receiving end can therefore monitor in real time whether a new memory block appears in the completion queue, and when determining that a new memory block has appeared, send a notification message to the sending end. For example, the processor may determine that the completion queue has received a new entry by checking whether the opcode of the work completion is IBV_WC_RECV.

Furthermore, a counter is disposed on the receiving end, and the counter is used to record the total number of the memory blocks removed from the receive queue between the time when the receive queue is created and the current time. That is, the counter records the number of the memory blocks in the completion queue. Since a memory block in the completion queue is one that has been processed by the processor of the receiving end, the counter may also be referred to as a processing counter.

In this case, the implementation process of the receiving end detecting that memory blocks in the receive queue have been removed from the receive queue is as follows: if the number recorded by the counter changes, it is determined that memory blocks in the receive queue have been removed from the receive queue. Correspondingly, the implementation process of sending the notification message to the sending end may be: sending a notification message carrying the number recorded by the counter to the sending end, so that the sending end determines the number of the idle memory blocks in the receive queue according to the number recorded by the counter.
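The consume-refill-notify cycle described above can be sketched as follows. The in-memory lists stand in for the RQ and CQ, and the class and field names are assumptions made for illustration:

```python
class Receiver:
    """Sketch of the processing counter and refill logic described above."""

    def __init__(self, n: int):
        self.n = n
        self.rq = []           # memory blocks currently in the receive queue
        self.cq = []           # removed (processed) blocks: the completion queue
        self.counter = 0       # total blocks removed since the RQ was created

    def consume_one(self):
        # The processor removes one block from the head of the RQ into the CQ,
        # then adds one idle block so the RQ capacity stays at N.
        block = self.rq.pop(0)
        self.cq.append(block)
        self.counter += 1
        self.rq.append(None)   # the added idle memory block

    def notify(self) -> dict:
        # The notification message carries the counter value; the sending end
        # derives the number of idle blocks from it (see step 406).
        return {"counter": self.counter}
```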

In addition, in the conventional RDMA technology, since the sending end and the receiving end do not involve intervention of a processor, when the sending end and the receiving end perform data transmission, a data link is usually created between the sending end and the receiving end to perform data transmission by using the data link, and at this time, data transmitted through the data link includes both valid data and control messages such as a memory request and the like. In the embodiment of the present application, in order to avoid that such control messages occupy the bandwidth of the valid data, a data link and a control link may be respectively created between the sending end and the receiving end, the valid data, which is a data unit, is transmitted through the data link, and the control messages, such as the negotiation message and the notification message, are transmitted through the control link, so as to avoid that the control messages occupy the valid bandwidth of the data link used by the data unit, thereby improving the data transmission performance.

Step 406: and the sending end determines the number of the idle memory blocks according to the notification message.

Based on step 405, in a scenario, if the notification message directly carries the number of the idle memory blocks, at this time, the sending end may directly obtain the number of the idle memory blocks from the notification message.

Optionally, in another scenario, if a counter is deployed on the receiving end, where the counter is used to record the total number of the memory blocks removed from the receive queue between the time when the receive queue is created and the current time, the notification message carries the number recorded by the counter. In this case, the implementation process of step 406 may be: determining the total number of the data units sent by the sending end between the time when the receive queue is created and the current time; and determining the number of the idle memory blocks according to the total number of the data units sent by the sending end, N, and the number recorded by the counter.

For example, if the total number of the data units sent by the sending end between the time when the receive queue is created and the current time is S, the total number of the memory blocks in the receive queue of the receiving end is N, and the number recorded by the counter is H, it can be determined that the number of the idle memory blocks in the receive queue is N - (S - H).
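The computation follows directly: S - H units have been sent but not yet processed, so they still occupy memory blocks, leaving N - (S - H) blocks idle. As a one-line sketch:

```python
def free_block_count(n: int, sent_total: int, processed_total: int) -> int:
    # n: total memory blocks N in the receive queue
    # sent_total: total data units S sent since the queue was created
    # processed_total: counter value H (units already processed)
    # S - H units still occupy blocks, so N - (S - H) blocks are idle.
    return n - (sent_total - processed_total)
```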

Step 407: and the sending end determines the target quantity according to the quantity of the idle memory blocks and the quantity of the data units to be sent.

In one possible implementation manner, the possible implementation procedures of step 407 are: and if the number of the free memory blocks is larger than or equal to the number of the data units to be sent, determining the number of the data units to be sent as a target number. And if the number of the idle memory blocks is smaller than the number of the data units to be sent, determining the number of the idle memory blocks as a target number. By the method, the sending end can control the sent flow according to the storage condition of the data units in the receiving queue in the receiving end.

For example, if the number of the currently idle memory blocks is 7 and the number of the data units to be sent is 3, the number of the data units to be sent, 3, is determined as the target number. If the number of the currently idle memory blocks is 2 and the number of the data units to be sent is 3, the number of the currently idle memory blocks, 2, is determined as the target number.
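The two branches above reduce to taking the smaller of the two counts, so the sending end never puts more data units in flight than there are idle memory blocks. A minimal sketch:

```python
def target_number(free_blocks: int, units_to_send: int) -> int:
    # If enough idle blocks exist, send everything pending; otherwise send
    # only as many units as there are idle memory blocks at the receiver.
    if free_blocks >= units_to_send:
        return units_to_send
    return free_blocks
```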

Step 408: the sending end sends the target number of data units to the receiving end.

The target number may be one or more, and after the sender sends the target number of data units to the receiver, the receiver may continue to process the data units sent by the sender through step 404. In this embodiment, a receiving end is configured with a receiving queue, where the receiving queue includes N memory blocks, one memory block is used to cache one data unit, and N is a positive integer greater than 1. That is, in this embodiment of the present application, a receiving queue with a fixed capacity is configured at a receiving end, and when the receiving end receives one or more data units sent by a sending end, each data unit in the one or more data units is cached in one memory block in the receiving queue, and the receiving end sends a notification message to the sending end in time, so that the sending end can send the data units according to the number of idle memory blocks, and it is not necessary for the sending end to send a memory application request to the receiving end before transmitting data each time, thereby avoiding frequent interaction between the receiving end and the sending end in a data transmission process.

All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present application, and the present application embodiment is not described in detail again.

Fig. 6 is a schematic structural diagram of a data transmission apparatus provided in an embodiment of the present application, where the data transmission apparatus may be implemented by software, hardware, or a combination of the two. The data transmission apparatus may include:

a first receiving module 601, configured to receive one or more data units sent by a sending end, where the number of the one or more data units is less than N;

a caching module 602, configured to cache the one or more data units in a memory block included in the receive queue;

a sending module 603, configured to send, to the sending end, a notification message used for indicating the number of idle memory blocks in the receive queue, so that the sending end continues to send data units according to the number of the idle memory blocks, where the idle memory blocks refer to memory blocks that do not cache data units.

Optionally, the sending module includes:

creating a submodule, configured to add idle memory blocks to the receive queue if it is detected that a memory block in the receive queue is removed from the receive queue, where the number of the added idle memory blocks is the same as the number of the removed memory blocks;

and the first sending submodule is used for sending the notification message to the sending end.

Optionally, a counter is disposed on the receiving end, and the counter is used to record the total number of memory chunks removed from the receiving queue between creation of the receiving queue and current time;

creating a submodule, further configured to:

if the number recorded by the counter changes, determining that the memory block in the receiving queue is removed from the receiving queue;

a first sending submodule, further configured to:

and sending a notification message carrying the number of the counter records to the sending end so that the sending end determines the number of the idle memory blocks in the receiving queue according to the number of the counter records.

Optionally, the apparatus further comprises:

a second receiving module, configured to receive a negotiation message sent by the sending end, where the negotiation message carries the size of the memory block to be created and the number of the memory blocks to be created;

a creating module, configured to create the receive queue according to the negotiation message, where a number of the memory blocks included in the created receive queue is equal to a number of the memory blocks to be created in the negotiation message, and a size of the memory blocks included in the created receive queue is equal to a size of the memory blocks to be created in the negotiation message.

Optionally, the sending end transmits data to the receiving end based on a remote direct memory access RDMA mode;

the first receiving module includes:

a receiving submodule, configured to receive one or more data units through a data link between a receiving end and a transmitting end;

the sending module, configured to send, to the sending end, a notification message used for indicating the number of idle memory blocks in the receive queue, includes:

and the third sending submodule is used for sending the notification message through a control link between the receiving end and the sending end.

In this embodiment of the present application, in the receiving end of the present application, a receiving queue is configured, where the receiving queue includes N memory blocks, one memory block is used to cache one data unit, and N is a positive integer greater than 1. That is, in this embodiment of the present application, a receiving queue with a fixed capacity is configured at a receiving end, and when the receiving end receives one or more data units sent by a sending end, each data unit in the one or more data units is cached in one memory block in the receiving queue, and the receiving end sends a notification message to the sending end in time, so that the sending end can send the data units according to the number of idle memory blocks, and it is not necessary for the sending end to send a memory application request to the receiving end before transmitting data each time, thereby avoiding frequent interaction between the receiving end and the sending end in a data transmission process.

Fig. 7 is a schematic structural diagram of a data transmission apparatus provided in an embodiment of the present application, where the data transmission apparatus may be implemented by software, hardware, or a combination of the two. The data transmission apparatus may include:

an obtaining module, configured to obtain a notification message sent by a receiving end, where the notification message is used to indicate the number of idle memory blocks in a receiving queue configured in the receiving end, where the receiving queue includes N memory blocks, one memory block is used to cache one data unit sent by a sending end, N is a positive integer greater than 1, and an idle memory block is a memory block that does not cache a data unit;

a first determining module, configured to determine, according to the notification message, the number of idle memory blocks;

a second determining module, configured to determine a target number according to the number of idle memory blocks and the number of data units to be sent;

and the first sending module is used for sending the target number of data units to the receiving end.

Optionally, the second determining module is configured to:

and if the number of the free memory blocks is larger than or equal to the number of the data units to be sent, determining the number of the data units to be sent as a target number.

Optionally, the second determining module is further configured to:

and if the number of the idle memory blocks is smaller than the number of the data units to be sent, determining the number of the idle memory blocks as a target number.

Optionally, a counter is disposed on the receiving end, where the counter is configured to record the total number of the memory blocks removed from the receive queue between the time when the receive queue is created and the current time, and the notification message carries the number recorded by the counter;

the first determining module includes:

a first determining submodule, configured to determine a total number of data units sent by a sending end between creation of a receive queue and current time;

and the second determining submodule is used for determining the number of the idle memory blocks according to the total number of the data units sent by the sending end, the N and the number recorded by the counter.

Optionally, the apparatus further comprises:

and a second sending module, configured to send a negotiation message to the receiving end, where the negotiation message carries the size of the data unit used by the sending end when transmitting data and the total number of the data units, so that the receiving end creates the receive queue according to the negotiation message.

In this embodiment of the present application, when a sending end sends one or more data units according to a notification message sent by a receiving end, the notification message is used to indicate the number of free memory blocks in a receive queue configured in the receiving end. The sending end sends a target number of data units to the receiving end according to the number of the idle memory blocks and the number of the data units to be sent, and a memory application request does not need to be sent to the receiving end before the sending end transmits data each time, so that frequent interaction between the receiving end and the sending end in a data transmission process is avoided.

It should be noted that, in the data transmission apparatus provided in the above embodiment, the division into the above functional modules is merely used as an example for illustration. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the data transmission apparatus and the data transmission method provided by the above embodiments belong to the same concept, and the specific implementation processes thereof are described in the method embodiments and are not described herein again.

The embodiment of the present application further provides a non-transitory computer-readable storage medium, and when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to execute the data transmission method provided in the above embodiment.

The embodiment of the present application further provides a computer program product containing instructions, which when run on a terminal, causes the terminal to execute the data transmission method provided by the foregoing embodiment.

Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application. The server may be a server in a cluster of background servers. The server is applied to the sending end and the receiving end in the embodiment of the application. Specifically, the method comprises the following steps:

the server 800 includes a Central Processing Unit (CPU) 801, a system memory 804 including a Random Access Memory (RAM) 802 and a Read Only Memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 also includes a basic input/output system (I/O system) 806, which facilitates transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.

The basic input/output system 806 includes a display 808 for displaying information and an input device 809 such as a mouse, keyboard, etc. for user input of information. Wherein a display 808 and an input device 809 are connected to the central processing unit 801 through an input output controller 810 connected to the system bus 805. The basic input/output system 806 may also include an input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 810 also provides output to a display screen, a printer, or other type of output device.

The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.

Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.

According to various embodiments of the present application, server 800 may also operate as a remote computer connected to a network through a network, such as the Internet. That is, the server 800 may be connected to the network 812 through the network interface unit 811 coupled to the system bus 805, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 811.

The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.

Embodiments of the present application further provide a non-transitory computer-readable storage medium, and when instructions in the storage medium are executed by a processor of a server, the server is enabled to execute the data transmission method provided in the foregoing embodiments.

Embodiments of the present application further provide a computer program product containing instructions, which when run on a server, cause the server to execute the data transmission method provided in the foregoing embodiments.

It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
