Scheduling method, device and storage medium of cut-through forwarding mode


Abstract

The invention provides a scheduling method, a device and a storage medium for a cut-through forwarding mode. The method comprises: receiving packets in the cut-through forwarding mode; if the master linked list is empty, storing the current packet and synchronously linking its storage address to the master linked list; if the master linked list is not empty and the previous packet has finished linking in the master linked list, storing the current packet and synchronously linking its storage address to the master linked list; if the master linked list is not empty and the previous packet has not finished linking in the master linked list, storing the current packet and, after it has been completely stored, linking its storage address to the slave linked list; and if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end identifier carried by the most recent packet fragment has been linked in the master linked list, transferring the content of the slave linked list and linking it at the tail address currently stored in the master linked list. The invention optimizes the logical and physical overhead while implementing the unicast cut-through forwarding function of the chip.

1. A scheduling method for a cut-through forwarding mode, the method comprising:

configuring a master linked list and a slave linked list with the same structure, the master linked list comprising a master linked list memory, a master head pointer register and a master tail pointer register, and the slave linked list comprising a slave linked list memory, a slave head pointer register and a slave tail pointer register;

receiving packets in the cut-through forwarding mode, wherein each packet comprises at least one packet fragment, and a packet fragment carries a start identifier and/or an end identifier;

if the master linked list is empty, storing the current packet and synchronously linking the storage address of the packet to the master linked list;

if the master linked list is not empty and the previous packet has finished linking in the master linked list, storing the current packet and synchronously linking the storage address of the current packet to the master linked list;

if the master linked list is not empty and the previous packet has not finished linking in the master linked list, storing the current packet and, after the current packet has been completely stored, linking the storage address of the current packet to the slave linked list;

and if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end identifier carried by the most recent packet fragment has been linked in the master linked list, transferring the content of the slave linked list and linking it at the tail address currently stored in the master linked list.

2. The scheduling method for a cut-through forwarding mode according to claim 1, wherein if the master linked list is empty, storing the current packet and synchronously linking the storage address of the packet to the master linked list comprises:

writing the queue head address corresponding to the packet fragment carrying the start identifier into the master head pointer register, and writing the queue tail address corresponding to the packet fragment carrying the end identifier into the master tail pointer register;

if the current packet comprises more than one packet fragment, storing the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier in the master linked list memory and linking them in their order of arrival.

3. The scheduling method for a cut-through forwarding mode according to claim 1, wherein if the master linked list is not empty and the previous packet has finished linking in the master linked list, storing the current packet and synchronously linking the storage address of the current packet to the master linked list comprises:

writing the queue head address corresponding to the packet fragment carrying the start identifier into the master linked list memory and linking it to the queue tail address currently stored in the master tail pointer register;

meanwhile, writing the queue tail address corresponding to the packet fragment carrying the end identifier into the master tail pointer register, replacing its previous value;

if the current packet comprises more than one packet fragment, storing the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier in the master linked list memory and linking them in their order of arrival.

4. The scheduling method for a cut-through forwarding mode according to claim 1, wherein if the master linked list is not empty and the previous packet has not finished linking in the master linked list, storing the current packet and, after the current packet has been completely stored, linking the storage address of the current packet to the slave linked list comprises:

if the slave linked list is currently empty, writing the queue head address corresponding to the packet fragment carrying the start identifier into the slave head pointer register, and writing the queue tail address corresponding to the packet fragment carrying the end identifier into the slave tail pointer register;

if the slave linked list is currently not empty, writing the queue head address corresponding to the packet fragment carrying the start identifier into the slave linked list memory and linking it to the queue tail address currently stored in the slave tail pointer register; meanwhile, writing the queue tail address corresponding to the packet fragment carrying the end identifier into the slave tail pointer register, replacing its previous value;

if the current packet comprises more than one packet fragment, storing the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier in the slave linked list memory and linking them in their order of arrival.

5. The scheduling method for a cut-through forwarding mode according to claim 1, wherein if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end identifier carried by the most recent packet fragment has been linked in the master linked list, transferring the content of the slave linked list and linking it at the tail address currently stored in the master linked list comprises:

transferring the queue head address stored in the slave head pointer register, writing it into the master linked list memory, and linking it to the queue tail address currently stored in the master tail pointer register;

meanwhile, transferring the queue tail address stored in the slave tail pointer register and writing it into the master tail pointer register, replacing its previous value.

6. The scheduling method for a cut-through forwarding mode according to any one of claims 2 to 5, wherein a master queue read status register is configured for the master linked list, and whether the master queue read status register is enabled is queried to determine whether the master linked list is currently empty;

and a slave queue read status register is configured for the slave linked list, and whether the slave queue read status register is enabled is queried to determine whether the slave linked list is currently empty.

7. The scheduling method for a cut-through forwarding mode according to any one of claims 2 to 5, wherein the method further comprises: configuring a master queue write status register for indicating whether a packet fragment may perform a linking operation on the master linked list;

if the master queue write status register is enabled, any packet fragment may perform a linking operation on the master linked list;

if the master queue write status register is disabled, the previous packet is still performing a linking operation on the master linked list, the packet fragment carrying the end identifier of the previous packet has not finished linking, and only packet fragments belonging to the previous packet may perform a linking operation on the master linked list.

8. The scheduling method for a cut-through forwarding mode according to any one of claims 2 to 5, wherein the method further comprises:

configuring a linked list status register, wherein each storage bit of the linked list status register stores the linked list write state of a corresponding source channel;

if the storage bit corresponding to a source channel is enabled, a packet fragment from that source channel may be forcibly linked to the master linked list;

and if the storage bit corresponding to a source channel is not enabled, a packet fragment from that source channel may not perform a linking operation on the master linked list.

9. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the scheduling method for a cut-through forwarding mode according to any one of claims 1 to 8.

10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the scheduling method for a cut-through forwarding mode according to any one of claims 1 to 8.

Technical Field

The invention belongs to the technical field of communication, and mainly relates to a scheduling method, a device and a storage medium for a cut-through forwarding mode.

Background

In high-density network chips, there is a large number of packet store-and-schedule requirements. A typical packet store-and-schedule model is shown in FIG. 1, where the input signals include: {queue number, data, linked list address (write information address)}.

The store-and-schedule model mainly comprises the following modules. The data memory buffers the "data" at the "write information address" of the input signal. The linked list control module performs the conventional linked list enqueue and dequeue operations; linked list control is a well-known technique and is not described in detail here. The linked list control module mainly comprises four sub-modules: {head pointer memory, tail pointer memory, linked list memory, queue read status}. The head pointer memory stores the storage address pointed to by the head pointer of the data, the tail pointer memory stores the storage address pointed to by the tail pointer of the data, and the linked list memory stores the storage addresses of the data. The queue read status indicates the state of the linked list control module: when it is "0" the queue has no further data to schedule, and when it is "1" the queue has further data to schedule. When the queue read status is "1", the scheduler takes the queue into scheduling, sends the scheduled queue to the linked list control module to obtain the read linked list address of the queue, and triggers the linked list control module to update the queue read status. The information read module accesses the data memory with the "linked list address" obtained by the scheduler, reads the data, and outputs it.
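For illustration, the conventional model of FIG. 1 can be pictured with the following C sketch; the structure name, widths and depths are assumptions made for the example and do not appear in the original design.

```c
/* Illustrative C model of the conventional store-and-schedule structure
 * described above (FIG. 1). Names and sizes are assumptions for the sketch;
 * in the chip these are hardware memories and registers. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_QUEUES 64   /* assumed number of queues         */
#define BUF_DEPTH  1024 /* assumed depth of the data memory */

typedef struct {
    uint32_t data_mem[BUF_DEPTH];       /* data memory: buffers "data" at the
                                           "write information address"        */
    uint16_t head_ptr[NUM_QUEUES];      /* head pointer memory                */
    uint16_t tail_ptr[NUM_QUEUES];      /* tail pointer memory                */
    uint16_t link_mem[BUF_DEPTH];       /* linked list memory: next address
                                           stored per buffered entry          */
    bool     q_read_status[NUM_QUEUES]; /* queue read status: true ("1") means
                                           the queue still has data to schedule */
} store_schedule_model_t;
```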

The core function of a network chip is to forward packets. Forwarding modes fall into two categories: the store-and-forward mode and the cut-through forwarding mode.

In the store-and-forward mode, a packet must first be completely buffered, as a linked list (referred to as the packet data linked list), in an on-chip memory, and the destination port of the packet is then determined by the forwarding logic. Specifically, an enqueue request is generated, the key information of the packet (its start address, packet length, and the like) is written into the linked list of the queue (referred to as the information linked list), and the packet waits for the queue to be scheduled; a queue is then selected by a QoS (Quality of Service) policy, the key information in the queue is scheduled and sent to a "packet data read module"; and the packet is read from the memory according to the packet start address in the key information and sent to the destination port.

To speed up packet storage and reading and improve network chip performance, a cut-through forwarding mode is usually adopted: it does not wait until the packet has been completely buffered in the on-chip memory, and the destination port of the packet can already be determined by the forwarding logic.

In the cut-through forwarding mode, an independent QoS module is provided, and the key information of different packets is chained together in a linked list; a queue is selected by some policy, the key information at the head address of the information linked list is read, the "packet data linked list head address" in the key information is sent to a "packet read module" to perform the packet read operation, the QoS state of the queue is updated with the packet length in the key information, and the linked list state of the queue is updated to wait for the next scheduling.

In the cut-through forwarding mode, scheduling starts before the complete packet has been buffered in the on-chip memory, so the real packet length is not yet known when the key information is generated. The QoS module therefore cannot use the real packet length when updating its internal state (for example, for traffic shaping), which degrades the accuracy of QoS. Moreover, when the memory is read, the packet may not yet be completely buffered in the chip, so a situation may occur in which the packet data cannot be read.

In addition, the existing cut-through forwarding mode requires two independent linked lists, a data linked list and an information linked list. When the packet buffer of a network chip is large, the physical area consumed is large; the two sets of linked list read and write operations also increase the forwarding delay of the packet, and they inevitably introduce additional memories for buffering, further increasing the logical and physical overhead.

Disclosure of Invention

To solve the above technical problems, an object of the present invention is to provide a scheduling method for a cut-through forwarding mode, a network chip, and a readable storage medium.

To achieve one of the above objects, an embodiment of the present invention provides a scheduling method for a cut-through forwarding mode, the method comprising: configuring a master linked list and a slave linked list with the same structure, the master linked list comprising a master linked list memory, a master head pointer register and a master tail pointer register, and the slave linked list comprising a slave linked list memory, a slave head pointer register and a slave tail pointer register;

receiving packets in the cut-through forwarding mode, wherein each packet comprises at least one packet fragment, and a packet fragment carries a start identifier and/or an end identifier;

if the master linked list is empty, storing the current packet and synchronously linking the storage address of the packet to the master linked list;

if the master linked list is not empty and the previous packet has finished linking in the master linked list, storing the current packet and synchronously linking the storage address of the current packet to the master linked list;

if the master linked list is not empty and the previous packet has not finished linking in the master linked list, storing the current packet and, after the current packet has been completely stored, linking the storage address of the current packet to the slave linked list;

and if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end identifier carried by the most recent packet fragment has been linked in the master linked list, transferring the content of the slave linked list and linking it at the tail address currently stored in the master linked list.

As a further improvement of an embodiment of the present invention, if the master linked list is empty, storing the current packet and synchronously linking the storage address of the packet to the master linked list includes:

writing the queue head address corresponding to the packet fragment carrying the start identifier into the master head pointer register, and writing the queue tail address corresponding to the packet fragment carrying the end identifier into the master tail pointer register;

if the current packet comprises more than one packet fragment, storing the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier in the master linked list memory and linking them in their order of arrival.

As a further improvement of an embodiment of the present invention, if the master linked list is not empty and the previous packet has finished linking in the master linked list, storing the current packet and synchronously linking the storage address of the current packet to the master linked list includes:

writing the queue head address corresponding to the packet fragment carrying the start identifier into the master linked list memory and linking it to the queue tail address currently stored in the master tail pointer register;

meanwhile, writing the queue tail address corresponding to the packet fragment carrying the end identifier into the master tail pointer register, replacing its previous value;

if the current packet comprises more than one packet fragment, storing the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier in the master linked list memory and linking them in their order of arrival.

As a further improvement of an embodiment of the present invention, if the master linked list is not empty and the previous packet has not finished linking in the master linked list, storing the current packet and, after the current packet has been completely stored, linking the storage address of the current packet to the slave linked list includes:

if the slave linked list is currently empty, writing the queue head address corresponding to the packet fragment carrying the start identifier into the slave head pointer register, and writing the queue tail address corresponding to the packet fragment carrying the end identifier into the slave tail pointer register;

if the slave linked list is currently not empty, writing the queue head address corresponding to the packet fragment carrying the start identifier into the slave linked list memory and linking it to the queue tail address currently stored in the slave tail pointer register; meanwhile, writing the queue tail address corresponding to the packet fragment carrying the end identifier into the slave tail pointer register, replacing its previous value;

if the current packet comprises more than one packet fragment, storing the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier in the slave linked list memory and linking them in their order of arrival.

As a further improvement of an embodiment of the present invention, if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end identifier carried by the most recent packet fragment has been linked in the master linked list, transferring the content of the slave linked list and linking it at the tail address currently stored in the master linked list includes:

transferring the queue head address stored in the slave head pointer register, writing it into the master linked list memory, and linking it to the queue tail address currently stored in the master tail pointer register;

meanwhile, transferring the queue tail address stored in the slave tail pointer register and writing it into the master tail pointer register, replacing its previous value.

As a further improvement of an embodiment of the present invention, a master queue read status register is configured for the master linked list, and whether the master queue read status register is enabled is queried to determine whether the master linked list is currently empty;

and a slave queue read status register is configured for the slave linked list, and whether the slave queue read status register is enabled is queried to determine whether the slave linked list is currently empty.

As a further improvement of an embodiment of the present invention, the method further comprises: configuring a master queue write status register for indicating whether a packet fragment may perform a linking operation on the master linked list;

if the master queue write status register is enabled, any packet fragment may perform a linking operation on the master linked list;

if the master queue write status register is disabled, the previous packet is still performing a linking operation on the master linked list, the packet fragment carrying the end identifier of the previous packet has not finished linking, and only packet fragments belonging to the previous packet may perform a linking operation on the master linked list.

As a further improvement of an embodiment of the present invention, the method further comprises:

configuring a linked list status register, wherein each storage bit of the linked list status register stores the linked list write state of a corresponding source channel;

if the storage bit corresponding to a source channel is enabled, a packet fragment from that source channel may be forcibly linked to the master linked list;

and if the storage bit corresponding to a source channel is not enabled, a packet fragment from that source channel may not perform a linking operation on the master linked list.

To achieve one of the above objects, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the scheduling method for the cut-through forwarding mode described above.

To achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the scheduling method for the cut-through forwarding mode described above.

Compared with the prior art, the present invention has the following beneficial effects: the scheduling method, device and storage medium for the cut-through forwarding mode optimize the logical and physical overhead and improve the accuracy of QoS while implementing the unicast cut-through forwarding function of the chip.

Drawings

FIG. 1 is a diagram of the data store-and-schedule model described in the background art;

FIG. 2 is a schematic flowchart of the scheduling method for a cut-through forwarding mode according to an embodiment of the present invention;

FIG. 3 and FIG. 4 are schematic diagrams illustrating the write data control principle of a specific example of the present invention.

Detailed Description

The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. These embodiments do not limit the present invention, and structural, methodological or functional changes made by those skilled in the art according to these embodiments are all included in the scope of protection of the present invention.

As shown in FIG. 2, the scheduling method for a cut-through forwarding mode according to an embodiment of the present invention comprises:

s1, configuring a main chain table and a slave chain table with the same structure; the main chain table comprises: a main chain table memory, a main head pointer register and a main tail pointer register; the slave linked list includes: a slave linked list memory, a slave head pointer register, and a slave tail pointer register;

s2, receiving messages in a direct forwarding mode, wherein each message comprises at least one message fragment, and one of the message fragments carries a start bit identifier and/or an end bit identifier;

s31, if the primary linked list is empty, storing the current message, and synchronously linking the storage address of the message to the primary linked list;

s32, if the main chain table is not empty and the previous message is linked in the main chain table, storing the current message and synchronously linking the storage address of the current message to the main chain table;

s33, if the master chain table is not empty, the previous message is not linked in the master chain table completely, the current message is stored, and after the current message is stored completely, the storage address of the current message is linked to the slave chain table;

and S34, if the slave linked list is not empty, monitoring the state of the main linked list in real time, and transferring and linking the content of the slave linked list to the address corresponding to the currently stored end bit identifier of the main linked list when the end bit identifier carried by the latest message fragment is linked to the main linked list.

For step S1, the store-and-schedule model of the present invention also includes a data memory for storing data. Suppose the real length of the transferred packet (data) is L and the bit width of the data memory is W; the packet is then divided into N fragments, where N = ceil(L/W) and ceil denotes rounding up, and the fragments are stored in the data memory.

For step S2, the packets in the network chip are aggregated by the MAC, and each packet is written into the data memory in the order of its fragments L0, L1, ..., LN-1. In a specific example of the present invention, the fragment L0 is configured with a sop (start of packet) flag bit, i.e. it carries the start identifier, and the fragment LN-1 carries an eop (end of packet) flag bit, i.e. the end identifier, so that the attributes of the individual fragments of a packet can be distinguished. It can be understood that when the packet length is less than or equal to W, the single packet fragment carries both the sop and the eop flag bit.
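As an illustration of the fragmentation rule above, the following C sketch splits a packet of length L into N = ceil(L/W) fragments and marks the first with sop and the last with eop; the fragment descriptor and the function name are assumptions made for the example.

```c
/* Sketch of splitting a packet of real length L into N = ceil(L/W) fragments,
 * the first carrying sop and the last carrying eop (both when N == 1). */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     sop;     /* start-of-packet flag                   */
    bool     eop;     /* end-of-packet flag                     */
    uint32_t length;  /* bytes carried by this fragment (<= W)  */
} fragment_t;

/* Fill 'frags' (capacity 'max') with descriptors for a packet of length L
 * over a data-memory width of W bytes; returns N. */
static int fragment_packet(uint32_t L, uint32_t W, fragment_t *frags, int max)
{
    int n = (int)((L + W - 1) / W);          /* N = ceil(L / W)   */
    for (int i = 0; i < n && i < max; i++) {
        frags[i].sop    = (i == 0);          /* L0 carries sop    */
        frags[i].eop    = (i == n - 1);      /* LN-1 carries eop  */
        frags[i].length = (i == n - 1) ? L - (uint32_t)(n - 1) * W : W;
    }
    return n;
}
```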

When a packet fragment needs to be buffered, an address, denoted ptr_X, is first requested from the data memory. Whether the current packet is to be written to the master linked list or to the slave linked list, the following operation is performed. Specifically, if the current fragment carries sop, it is the first fragment of the packet data, and ptr_X is used to update the "head address memory" and the "tail address memory" of the source channel of the packet; if the current fragment does not carry sop, the value in the "tail address memory" is used as the address and ptr_X as the value to write the "data linked list", and ptr_X is simultaneously written into the "tail address memory". The linked list write operations of the present invention are described in more detail below.
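The per-fragment buffering step just described can be sketched as follows in C; the channel-indexed head/tail address memories and the function name are illustrative assumptions rather than the literal hardware interface.

```c
/* Sketch of the per-fragment write described above: an address ptr_X is
 * allocated from the data memory; a sop fragment initialises the head and
 * tail address memories of its source channel, any other fragment is chained
 * behind the current tail. Names and sizes are assumptions. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_CHANNELS 16
#define BUF_DEPTH    1024

typedef struct {
    uint16_t head_addr[NUM_CHANNELS]; /* per-channel head address memory */
    uint16_t tail_addr[NUM_CHANNELS]; /* per-channel tail address memory */
    uint16_t link_mem[BUF_DEPTH];     /* data linked list: next address  */
} channel_chain_t;

void buffer_fragment(channel_chain_t *c, int channel, bool sop, uint16_t ptr_x)
{
    if (sop) {
        /* first fragment of the packet: head and tail both point at ptr_X */
        c->head_addr[channel] = ptr_x;
        c->tail_addr[channel] = ptr_x;
    } else {
        /* chain ptr_X behind the current tail, then advance the tail */
        c->link_mem[c->tail_addr[channel]] = ptr_x;
        c->tail_addr[channel] = ptr_x;
    }
}
```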

Steps S31, S32, S33 and S34 are numbered in this way only for convenience of description; they may be executed sequentially or in parallel, and the order of the steps does not affect the output result.

In a specific embodiment of the present invention, step S31 specifically includes: writing the queue head address corresponding to the packet fragment carrying the start identifier into the master head pointer register, and writing the queue tail address corresponding to the packet fragment carrying the end identifier into the master tail pointer register;

if the current packet comprises more than one packet fragment, the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier are stored in the master linked list memory and linked in their order of arrival.

In a specific embodiment of the present invention, step S32 specifically includes: writing the queue head address corresponding to the packet fragment carrying the start identifier into the master linked list memory and linking it to the queue tail address currently stored in the master tail pointer register;

meanwhile, writing the queue tail address corresponding to the packet fragment carrying the end identifier into the master tail pointer register, replacing its previous value;

if the current packet comprises more than one packet fragment, the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier are stored in the master linked list memory and linked in their order of arrival.
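A minimal C sketch of the master-list linking performed in steps S31 and S32 is given below, assuming the register and memory names introduced above; it is intended to mirror the behaviour of the worked example of FIG. 3 (A3 and A5) and is not a definitive implementation.

```c
/* Sketch of steps S31/S32: linking a fragment's storage address into the
 * master linked list, either starting a new list (S31) or appending behind
 * the previous packet (S32). Register names are illustrative. */
#include <stdint.h>
#include <stdbool.h>

#define BUF_DEPTH 1024

typedef struct {
    uint16_t link_mem[BUF_DEPTH]; /* master linked list memory          */
    uint16_t head_ptr;            /* master head pointer register       */
    uint16_t tail_ptr;            /* master tail pointer register       */
    bool     read_status;         /* master queue read status register  */
    bool     write_status;        /* master queue write status register */
} master_list_t;

/* Called for each fragment, in arrival order, once its data-memory address
 * 'addr' is known; 'sop'/'eop' are the fragment's flags. */
void master_link_fragment(master_list_t *m, uint16_t addr, bool sop, bool eop)
{
    if (sop && !m->read_status) {
        /* S31: master list empty - the sop fragment becomes head and tail */
        m->head_ptr    = addr;
        m->tail_ptr    = addr;
        m->read_status = true;          /* list is no longer empty */
    } else {
        /* S31 (later fragments) or S32: chain behind the current tail,
         * then move the tail pointer to this fragment */
        m->link_mem[m->tail_ptr] = addr;
        m->tail_ptr              = addr;
    }
    /* the write status register is enabled only once the eop fragment of
     * the packet has been linked (see the register description below) */
    m->write_status = eop;
}
```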

In a specific embodiment of the present invention, step S33, i.e. if the master linked list is not empty and the previous packet has not finished linking in the master linked list, storing the current packet and, after the current packet has been completely stored, linking the storage address of the current packet to the slave linked list, specifically includes:

if the slave linked list is currently empty, writing the queue head address corresponding to the packet fragment carrying the start identifier into the slave head pointer register, and writing the queue tail address corresponding to the packet fragment carrying the end identifier into the slave tail pointer register;

if the slave linked list is currently not empty, writing the queue head address corresponding to the packet fragment carrying the start identifier into the slave linked list memory and linking it to the queue tail address currently stored in the slave tail pointer register; meanwhile, writing the queue tail address corresponding to the packet fragment carrying the end identifier into the slave tail pointer register, replacing its previous value;

if the current packet comprises more than one packet fragment, the storage addresses of the remaining, sequentially arranged packet fragments other than the fragment carrying the start identifier are stored in the slave linked list memory and linked in their order of arrival.
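A corresponding C sketch of step S33 follows, again with illustrative names; it assumes the packet has already been completely stored, so all fragment addresses are available when the slave list is written.

```c
/* Sketch of step S33: once a packet has been completely stored while the
 * master list is still being written by the previous packet, its fragment
 * addresses are linked into the slave linked list. Names are illustrative. */
#include <stdint.h>
#include <stdbool.h>

#define BUF_DEPTH 1024

typedef struct {
    uint16_t link_mem[BUF_DEPTH]; /* slave linked list memory         */
    uint16_t head_ptr;            /* slave head pointer register      */
    uint16_t tail_ptr;            /* slave tail pointer register      */
    bool     read_status;         /* slave queue read status register */
} slave_list_t;

/* 'addrs[0..n-1]' are the storage addresses of the packet's fragments in
 * arrival order (addrs[0] is the sop fragment, addrs[n-1] the eop fragment). */
void slave_link_packet(slave_list_t *s, const uint16_t *addrs, int n)
{
    int i = 0;
    if (!s->read_status) {
        /* slave list empty: the sop fragment address seeds head and tail */
        s->head_ptr    = addrs[0];
        s->tail_ptr    = addrs[0];
        s->read_status = true;
        i = 1;
    }
    for (; i < n; i++) {
        /* chain each remaining fragment behind the current slave tail */
        s->link_mem[s->tail_ptr] = addrs[i];
        s->tail_ptr              = addrs[i];
    }
}
```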

In a specific embodiment of the present invention, step S34, i.e. if the slave linked list is not empty, monitoring the state of the master linked list in real time and, when the end identifier carried by the most recent packet fragment has been linked in the master linked list, transferring the content of the slave linked list and linking it at the tail address currently stored in the master linked list, specifically includes:

transferring the queue head address stored in the slave head pointer register, writing it into the master linked list memory, and linking it to the queue tail address currently stored in the master tail pointer register;

meanwhile, transferring the queue tail address stored in the slave tail pointer register and replacing the content of the master tail pointer register with it;

in this way, the entire content of the slave linked list is transferred out of the slave linked list memory and linked into the master linked list.
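Step S34 can be sketched as below; splicing the slave chain by copying its next-pointers into the master linked list memory is an assumption about how the transfer between the two memories might be realised, not a statement of the actual hardware mechanism.

```c
/* Sketch of step S34: when the eop fragment of the packet currently being
 * linked has reached the master list, the whole slave list is spliced onto
 * the master tail. The per-address copy models the transfer of the slave
 * chain into the master linked list memory and is an assumption. */
#include <stdint.h>
#include <stdbool.h>

#define BUF_DEPTH 1024

typedef struct { uint16_t link_mem[BUF_DEPTH]; uint16_t head_ptr, tail_ptr;
                 bool read_status; } list_t;   /* same fields as above */

void splice_slave_into_master(list_t *master, list_t *slave)
{
    if (!slave->read_status)
        return;                                   /* nothing to transfer */

    /* link the slave head behind the current master tail */
    master->link_mem[master->tail_ptr] = slave->head_ptr;

    /* carry the slave chain over into the master linked list memory */
    for (uint16_t a = slave->head_ptr; a != slave->tail_ptr;
         a = slave->link_mem[a])
        master->link_mem[a] = slave->link_mem[a];

    /* the slave tail becomes the new master tail; the slave list is empty */
    master->tail_ptr   = slave->tail_ptr;
    slave->read_status = false;
}
```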

In a preferred embodiment of the present invention, a master queue read status register, a slave queue read status register, a master queue write status register and a linked list status register are configured for reading the state of each linked list, as described in detail below.

Specifically, a master queue read status register is configured for the master linked list, and whether it is enabled is queried to determine whether the master linked list is currently empty; likewise, whether the slave queue read status register is enabled is queried to determine whether the slave linked list is currently empty.

The master queue write status register indicates whether a packet fragment may perform a linking operation on the master linked list. If the master queue write status register is enabled, any packet fragment may perform a linking operation on the master linked list; if it is disabled, the previous packet is still performing a linking operation on the master linked list, its fragment carrying the end identifier has not yet finished linking, and only packet fragments belonging to that previous packet may perform a linking operation on the master linked list.

Each storage bit of the linked list status register stores the linked list write state of a corresponding source channel. If the bit corresponding to a source channel is enabled, packets from that source channel may be forcibly linked to the master linked list; if it is not enabled, packets from that source channel may not perform a linking operation on the master linked list. Essentially, the linked list status register identifies which source channel's packet fragments may continue to be linked to the master linked list when the master queue write status register is not enabled.
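The following C sketch shows one way the four status registers could steer an arriving fragment; the decision function and its return codes are illustrative assumptions.

```c
/* Sketch of how the status registers described above steer an arriving
 * fragment: master/slave queue read status, the master queue write status,
 * and the per-source-channel linked list status register. */
#include <stdbool.h>

#define NUM_CHANNELS 16

typedef struct {
    bool master_read_status;               /* master list non-empty        */
    bool slave_read_status;                /* slave list non-empty         */
    bool master_write_status;              /* any fragment may link master */
    bool chan_link_status[NUM_CHANNELS];   /* this channel may force-link  */
} link_status_regs_t;

typedef enum { LINK_TO_MASTER, LINK_TO_SLAVE_AFTER_STORE } link_target_t;

link_target_t route_fragment(const link_status_regs_t *r, int src_channel)
{
    if (!r->master_read_status)
        return LINK_TO_MASTER;               /* S31: master list empty       */
    if (r->master_write_status)
        return LINK_TO_MASTER;               /* S32: previous packet done    */
    if (r->chan_link_status[src_channel])
        return LINK_TO_MASTER;               /* same source channel as the
                                                packet still being linked    */
    return LINK_TO_SLAVE_AFTER_STORE;        /* S33: store fully, then slave */
}
```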

A specific example is described below in conjunction with fig. 3 and 4 to facilitate understanding.

A1, a packet M1 is received and an enqueue request is generated; the enqueue request may be generated on any fragment of the packet, as determined by the actual policy, which is not limited by the present invention;

A2, the "queue number" of the packet M1 is used as an address to read the master queue read status register; if its state is "0", the master linked list is empty and the flow proceeds to A3; if its state is "1", the master linked list is not empty and the flow proceeds to A4;

A3, step S31 is executed. The packet M1 is divided into two fragments, a fragment M11 carrying sop and a fragment M12 carrying eop. When the storage of the fragment M11 is finished, its address is written, as a value and with the queue number of M11 as the address, into both the master head pointer register and the master tail pointer register; the master queue read status register is set to "1", and the master queue write status register is set to "0", where "0" indicates not enabled;

when the fragment M12 has been stored, its address is written, as a value and with the address pointed to by the current master tail pointer as the address, into the master linked list memory, completing the link operation; the address of M12 is then written as a value into the master tail pointer register, completing its update; the master queue read status register remains "1", and the master queue write status register is set to "1", where "1" indicates enabled;

A4, when a new packet arrives and the master queue read status register is queried as "1", the master linked list is not empty, and two cases exist:

Case 1: the master queue write status register is found to be "1", i.e. enabled; the master linked list is not empty and the previous packet has finished linking in the master linked list, so the fragments of the new packet may perform linking operations on the master linked list, and the flow proceeds to A5;

Case 2: the master queue write status register is found to be "0", i.e. not enabled, indicating that the master linked list is not empty and the previous packet has not finished linking in the master linked list; in this case the master linked list can only continue to accept the linking operations of the previous packet, and the new packet executes step A6;

A5, a new packet M2 arrives and step S32 is executed. The packet M2 is divided into two fragments, a fragment M21 carrying sop and a fragment M22 carrying eop.

When the packet M2 arrives, the corresponding linked list status register is queried and found to be "1", that is, the packet M2 may be linked to the master linked list;

when the fragment M21 has been stored, its address is written, as a value and with the address pointed to by the current master tail pointer as the address, into the master linked list memory, completing the link operation; the address of M21 is then written as a value into the master tail pointer register, completing its update; the master queue read status register remains "1", and the master queue write status register is set to "0";

when the fragment M22 has been stored, its address is written, as a value and with the address pointed to by the current master tail pointer as the address, into the master linked list memory, completing the link operation; the address of M22 is then written as a value into the master tail pointer register, completing its update; the master queue read status register remains "1", and the master queue write status register is set to "1";

A6, a packet M3 is being linked to the master linked list and its fragment carrying eop has not yet been linked; at this point a packet M4 has been stored and issues a link request to the linked list;

the packet M4 is divided into a fragment M41 carrying sop and a fragment M42 carrying eop; it should be noted that, in order to avoid link errors and save hardware space, one of the conditions for executing step S33 is that the packet has been completely stored;

accordingly, after the packet M4 has been completely stored, step S33 is executed;

specifically, the "queue number" of the packet M4 is used as an address to read the slave queue read status register; if its state is "0", the slave linked list is empty and the flow proceeds to A7; if its state is "1", the slave linked list is not empty and the flow proceeds to A8;

A7, the address of the fragment M41 is written, as a value and with the queue number of M41 as the address, into both the slave head pointer register and the slave tail pointer register, and the slave queue read status register is set to "1";

the address of the fragment M42 is then written, as a value and with the address pointed to by the current slave tail pointer as the address, into the slave linked list memory, completing the link operation; the address of M42 is written as a value into the slave tail pointer register, completing its update, and the slave queue read status register remains "1";

A8, the address of the fragment M41 is written, as a value and with the address pointed to by the current slave tail pointer as the address, into the slave linked list memory, completing the link operation; the address of M41 is written as a value into the slave tail pointer register, completing its update, and the slave queue read status register remains "1";

the address of the fragment M42 is then written, as a value and with the address pointed to by the current slave tail pointer as the address, into the slave linked list memory, completing the link operation; the address of M42 is written as a value into the slave tail pointer register, completing its update, and the slave queue read status register remains "1".

When a packet M5 arrives, the fragment of the packet M3 carrying eop has, in this example, still not been linked, so the packet M5 also needs to be linked to the slave linked list, behind the packet M4; the linking is performed in the same way as described above and is not repeated here.

In addition, steps A1 to A8 are not necessarily executed in this order. During the execution of steps A1 to A8, once the packet performing the linking operation on the master linked list has been completely stored and has finished linking on the master linked list, the slave queue read status register is read: if its state is "0", no other packet is linked on the slave linked list, and steps A1 to A8 continue to be executed; if its state is "1", other packets are linked on the slave linked list, the flow proceeds to step A9, and step S34 is executed.

Continuing with FIG. 3 and referring to FIG. 4, in A9 the fragment M32 of the packet M3, which carries eop, has been completely received and has finished linking on the master linked list; at this point the content of the slave linked list needs to be linked to the master linked list. Specifically, the queue head address stored in the slave head pointer register, i.e. the address corresponding to the fragment M41, is transferred and written into the master linked list memory, linked to the queue tail address currently stored in the master tail pointer register;

the link information is transferred directly from the slave linked list memory to the master linked list memory, that is, the fragment M42 remains linked to the fragment M41, and the packet M5 remains linked to the fragment M42, in the master linked list memory;

and the queue tail address stored in the slave tail pointer register, i.e. the address corresponding to the packet M5, is transferred and replaces the content of the master tail pointer register.

Further, for the dequeue operation, the same destination port may have multiple queues. When a destination port selects a queue for dequeuing, the fragments of a packet in that queue are dispatched until the fragment carrying the eop flag bit has been dispatched; a reselection may then be performed, switching to another queue for dequeuing or stopping dequeuing according to the traffic shaping result, and the master queue read status register is updated at the same time. During dequeuing, the address carried by the packet fragment is used to access the data memory, so the data of the fragment is obtained without any other additional scheduling behaviour.
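A simplified C sketch of this dequeue behaviour is given below; the fragment_is_eop and read_and_send helpers are hypothetical stand-ins for information that, in hardware, travels with the stored fragment, and the sketch simply stops when the list is drained even though, in cut-through operation, further fragments of the packet may still be arriving.

```c
/* Sketch of the dequeue behaviour described above: fragments are popped from
 * the master list head and their addresses used directly to read the data
 * memory; after the eop fragment the port may reselect. */
#include <stdint.h>
#include <stdbool.h>

#define BUF_DEPTH 1024

typedef struct { uint16_t link_mem[BUF_DEPTH]; uint16_t head_ptr, tail_ptr;
                 bool read_status; } master_list_t;

extern bool fragment_is_eop(uint16_t addr);  /* hypothetical eop lookup     */
extern void read_and_send(uint16_t addr);    /* hypothetical data-memory read */

/* Dispatch one whole packet from the selected queue, fragment by fragment. */
void dequeue_packet(master_list_t *m)
{
    bool eop = false;
    while (m->read_status && !eop) {
        uint16_t addr = m->head_ptr;
        read_and_send(addr);                 /* address -> data memory       */
        eop = fragment_is_eop(addr);
        if (addr == m->tail_ptr)
            m->read_status = false;          /* list drained                 */
        else
            m->head_ptr = m->link_mem[addr]; /* advance to the next address  */
    }
}
```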

Preferably, an embodiment of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be executed on the processor, and the processor executes the computer program to implement the steps in the scheduling method in the cut-through forwarding mode as described above.

Preferably, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the scheduling method of the cut-through forwarding mode as described above.

In summary, the scheduling method, device and storage medium for the cut-through forwarding mode of the present invention merge the physical memories of the "packet data linked list" and the "information linked list" and implement the unicast cut-through forwarding function with only one physical linked list through dedicated enqueue and dequeue operations; the QoS state is updated in real time according to the real packet length, which improves QoS accuracy, improves system performance and reduces the logical and physical overhead. Moreover, the fragment addresses of a packet are obtained directly at dequeue time, optimizing the packet forwarding delay.

The system embodiments described above are merely illustrative. The modules described as separate parts may or may not be physically separated, and the parts shown as modules are logical modules, i.e. they may be located in one module of the chip logic or distributed over a plurality of data processing modules in the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.

The present application can be used in many general-purpose or special-purpose communication chips, for example switch chips, router chips, server chips, and the like.

It should be understood that although the present description is set out in terms of embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted only for clarity, and those skilled in the art should take the description as a whole, as the technical solutions in the embodiments may also be suitably combined to form other embodiments that can be understood by those skilled in the art.

The detailed description set out above is only a specific description of feasible embodiments of the present invention and is not intended to limit the scope of protection of the present invention; equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall all be included in the scope of protection of the present invention.
