Data scheduling method and device for a PCIE (Peripheral Component Interconnect Express) switch chip port

Document No.: 948192 | Publication date: 2020-10-30

Note: This technology, "Data scheduling method and device for PCIE switch chip port", was designed and created by 崔飞飞, 张建波, 赵姣, and 杨珂 on 2020-06-29. Its main content is as follows: the present disclosure provides a data scheduling method and device for a PCIE switch chip port, where the method includes: writing a transaction packet received by the PCIE switch chip port into the storage space of the port according to the response type of the transaction packet; sequentially enqueuing the response types into a preset record queue according to the order in which the transaction packets are written into the storage space; determining the scheduling blocking state of a previously fetched transaction packet in the storage space; obtaining, through a data link layer, the remaining receive capacity of the link partner device for transaction packets of each response type; and scheduling the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity. Embodiments of the present disclosure can ensure the accuracy of data scheduling at the PCIE switch chip port.

1. A data scheduling method for a PCIE switch chip port, characterized in that the method comprises:

writing a transaction packet received by the PCIE switch chip port into a storage space of the port according to a response type of the transaction packet;

sequentially enqueuing the response types into a preset record queue according to the order in which the transaction packets are written into the storage space;

determining a scheduling blocking state of a previously fetched transaction packet in the storage space;

obtaining, through a data link layer, a remaining receive capacity of a link partner device for transaction packets of each response type; and

scheduling the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity.

2. The method of claim 1, wherein scheduling the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity comprises:

controlling fetching of the transaction packets from the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity; and

scheduling the fetched transaction packets based on a preset scheduling processing rule.

3. The method of claim 2, wherein controlling fetching of the transaction packets from the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity comprises:

controlling fetching of a current transaction packet to be fetched from the storage space based on the record queue and the scheduling blocking state; and

controlling fetching of a next transaction packet to be fetched from the storage space based on the record queue and the remaining receive capacity.

4. The method of claim 3, wherein controlling fetching of the current transaction packet to be fetched from the storage space based on the record queue and the scheduling blocking state comprises:

determining a first response type of the previously fetched transaction packet according to the record queue;

determining a second response type of the current transaction packet to be fetched according to the record queue; and

controlling fetching of the current transaction packet to be fetched based on the first response type, the second response type, and the scheduling blocking state.

5. The method of claim 3, wherein controlling fetching of the next transaction packet to be fetched from the storage space based on the record queue and the remaining receive capacity comprises:

determining a second response type of the current transaction packet to be fetched according to the record queue;

determining a third response type of the next transaction packet to be fetched according to the record queue; and

controlling fetching of the next transaction packet to be fetched based on the second response type, the third response type, and the remaining receive capacity.

6. The method of claim 2, wherein scheduling the fetched transaction packets based on the preset scheduling processing rule comprises:

screening out transaction packets with correct information from the fetched transaction packets according to a preset check rule;

determining target ports corresponding to the transaction packets with correct information according to preset port configuration information; and

forwarding the transaction packets with correct information to the corresponding target ports.

7. The method of claim 2, wherein scheduling the fetched transaction packets based on the preset scheduling processing rule comprises:

screening out and discarding transaction packets with erroneous information from the fetched transaction packets according to a preset check rule;

registering error information corresponding to the erroneous transaction packets in a global error register; and

reporting the error information registered in the global error register to a host port.

8. The method of claim 2, wherein scheduling the fetched transaction packets based on the preset scheduling processing rule comprises:

if a fetched transaction packet is a memory-write transaction packet, obtaining an address of the fetched transaction packet;

obtaining configuration information of a multicast extension register; and

determining a multicast group to which the fetched transaction packet belongs based on the address and the configuration information, and performing multicast processing on the fetched transaction packet.

9. The method of claim 2, wherein scheduling the fetched transaction packets based on the preset scheduling processing rule comprises:

if a fetched transaction packet is a transaction packet related to a downstream port request, performing permission control on the fetched transaction packet; and

scheduling the fetched transaction packet according to a result of the permission control.

10. A data scheduling device for a PCIE switch chip port, characterized in that the device comprises:

a writing module configured to write a transaction packet received by the PCIE switch chip port into a storage space of the port according to a response type of the transaction packet;

a recording module configured to sequentially enqueue the response types into a preset record queue according to the order in which the transaction packets are written into the storage space;

a determining module configured to determine a scheduling blocking state of a previously fetched transaction packet in the storage space;

an obtaining module configured to obtain, through a data link layer, a remaining receive capacity of a link partner device for transaction packets of each response type; and

a scheduling module configured to schedule the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity.

Technical Field

The present disclosure relates to the field of chips, and in particular to a data scheduling method and device for a PCIE switch chip port.

Background

PCIE (Peripheral Component Interconnect Express) is a high-speed serial computer expansion bus standard and the third-generation I/O bus succeeding the PCI bus. It is widely used to interconnect devices such as CPUs, graphics cards, and sound cards.

Transaction packets at a PCIE chip port are divided into three response types: P (Posted), NP (Non-Posted), and CPL (Completion). After an NP request is sent, a CPL response must be received before the transaction is complete; after a P request is sent, no CPL response is required. The forwarding order of the three transaction packet types must follow the producer/consumer model and satisfy the ordering rules specified by the PCIE protocol.

Although many PCIE switch chips are available on the market, very little has been disclosed about their application-layer implementation mechanisms.

Disclosure of Invention

One objective of the present disclosure is to provide a data scheduling method and apparatus for a PCIE switch chip port, which can ensure the accuracy of data scheduling at the PCIE switch chip port.

According to an aspect of the embodiments of the present disclosure, a data scheduling method for a PCIE switch chip port is disclosed, where the method includes:

writing a transaction packet received by the PCIE switch chip port into a storage space of the port according to a response type of the transaction packet;

sequentially enqueuing the response types into a preset record queue according to the order in which the transaction packets are written into the storage space;

determining a scheduling blocking state of a previously fetched transaction packet in the storage space;

obtaining, through a data link layer, a remaining receive capacity of a link partner device for transaction packets of each response type; and

scheduling the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity.

According to an aspect of the embodiments of the present disclosure, a data scheduling apparatus for a PCIE switch chip port is disclosed, where the apparatus includes:

a writing module configured to write a transaction packet received by the PCIE switch chip port into a storage space of the port according to a response type of the transaction packet;

a recording module configured to sequentially enqueue the response types into a preset record queue according to the order in which the transaction packets are written into the storage space;

a determining module configured to determine a scheduling blocking state of a previously fetched transaction packet in the storage space;

an obtaining module configured to obtain, through a data link layer, a remaining receive capacity of a link partner device for transaction packets of each response type; and

a scheduling module configured to schedule the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity.

In the embodiments of the present disclosure, a transaction packet received by a PCIE switch chip port is written into a storage space according to its response type, and the response type is enqueued into a record queue. The transaction packets in the storage space are then scheduled based on the record queue, the scheduling blocking state of the previously fetched transaction packet, and the remaining receive capacity of the link partner device for transaction packets of each response type. In this way, when data scheduling is performed at the PCIE switch chip port, deadlock between the devices on the two sides of the data link is avoided and the accuracy of data scheduling is ensured.

Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.

Fig. 1 shows a schematic hierarchical structure of a four-port PCIE switch chip according to one embodiment of the present disclosure.

Fig. 2 shows a schematic diagram of a global error handling module in a PCIE switch chip according to an embodiment of the present disclosure.

Fig. 3 shows a flowchart of a data scheduling method for a PCIE switch chip port according to an embodiment of the present disclosure.

Fig. 4 illustrates a detailed module composition of a port ingress side processing module according to one embodiment of the present disclosure.

Fig. 5 illustrates a detailed module composition of a port egress side processing module according to one embodiment of the present disclosure.

Fig. 6 shows a flow diagram of multicast processing according to one embodiment of the present disclosure.

Fig. 7 illustrates a detailed module composition of a port processing module according to one embodiment of the present disclosure.

Fig. 8 shows a block diagram of a data scheduling apparatus of a PCIE switch chip port according to an embodiment of the present disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.

Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

Fig. 1 shows a schematic hierarchical structure of a four-port PCIE switch chip according to an embodiment of the present disclosure.

In this embodiment, the PCIE switch chip implements communication between devices connected to its ports by receiving, scheduling, and forwarding data between the ports. The ports are peers of one another; each port contains its own physical layer, data link layer, transaction layer, and application layer, and schedules data with the corresponding logic. Each port has a corresponding port processing module that is mainly used for application-layer data scheduling, while the global error handling module in the chip is shared by all ports and is mainly used to process error information generated inside the chip.

It should be noted that this embodiment is merely an exemplary illustration and does not imply that the present disclosure applies only to four-port PCIE switch chips; it should not be construed as limiting the function or scope of the disclosure.

Fig. 2 shows a schematic diagram of a global error handling module in a PCIE switch chip according to an embodiment of the present disclosure.

In this embodiment, the global error handling module shared by all ports of the PCIE switch chip is mainly used to handle errors occurring in the chip. An error is registered in an error register according to its type; after error state detection, error reporting is performed according to information stored in the error register, such as the error mask bits and error priority, and the error is reported to the host port in the form of message packets. In addition, a CPL transaction packet is sent to the egress-side processing module of the port from which the source transaction packet originated.

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

Fig. 3 shows a data scheduling method for a PCIE switch chip port according to an embodiment of the present disclosure, where the method includes:

Step S210: writing a transaction packet received by the PCIE switch chip port into the storage space of the port according to the response type of the transaction packet;

Step S220: sequentially enqueuing the response types into a preset record queue according to the order in which the transaction packets are written into the storage space;

Step S230: determining the scheduling blocking state of the previously fetched transaction packet in the storage space;

Step S240: obtaining, through the data link layer, the remaining receive capacity of the link partner device for transaction packets of each response type;

Step S250: scheduling the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity.

In the embodiments of the present disclosure, a transaction packet received by a PCIE switch chip port is written into a storage space according to its response type, and the response type is enqueued into a record queue. The transaction packets in the storage space are then scheduled based on the record queue, the scheduling blocking state of the previously fetched transaction packet, and the remaining receive capacity of the link partner device for transaction packets of each response type. In this way, when data scheduling is performed at the PCIE switch chip port, deadlock between the devices on the two sides of the data link is avoided and the accuracy of data scheduling is ensured.

In the embodiments of the present disclosure, after receiving a transaction packet, the PCIE switch chip port determines its response type: a P transaction packet, an NP transaction packet, or a CPL transaction packet. The transaction packet is then written into the storage space of the port according to its response type: a P transaction packet is written into the space/queue dedicated to P transaction packets, an NP transaction packet into the space/queue dedicated to NP transaction packets, and a CPL transaction packet into the space/queue dedicated to CPL transaction packets.

The response types of the transaction packets are sequentially enqueued into a preset record queue according to the order in which the transaction packets are written into the storage space. For example: if the port writes a P transaction packet, an NP transaction packet, a CPL transaction packet, and another NP transaction packet into the storage space in that order, it enqueues their response types into the record queue in the same order, yielding the record queue "NP -> CPL -> NP -> P" (listed from the most recently enqueued entry to the earliest). The record queue follows a first-in-first-out queue management rule.
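By way of illustration, the following minimal Python sketch models this write path under assumed names (IngressStore, spaces); it is a behavioral model of the described bookkeeping, not the chip implementation:

    from collections import deque

    P, NP, CPL = "P", "NP", "CPL"   # the three response types

    class IngressStore:
        """Per-response-type storage plus a FIFO record queue of write order."""
        def __init__(self):
            # One dedicated space/queue per response type.
            self.spaces = {P: deque(), NP: deque(), CPL: deque()}
            # Records only the response type, in write order (first in, first out).
            self.record_queue = deque()

        def write(self, packet_type, packet):
            # Write the packet into the space for its response type and
            # enqueue the response type into the record queue.
            self.spaces[packet_type].append(packet)
            self.record_queue.append(packet_type)

    store = IngressStore()
    for t in (P, NP, CPL, NP):           # write order from the example above
        store.write(t, {"type": t})
    print(list(store.record_queue))      # ['P', 'NP', 'CPL', 'NP']  (oldest to newest)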

The port determines the scheduling blocking state of the previously fetched transaction packet in the storage space, that is, whether that packet is currently blocked.

The port obtains the remaining receive capacity of the link partner device for transaction packets of each response type, mainly through communication at the data link layer. Specifically, the link partner device feeds back its remaining receive capacity for transaction packets of each response type to the data link layer, so the port learns how many P transaction packets, NP transaction packets, and CPL transaction packets the link partner device can still accept. In particular, the data link flow control mechanism may measure the remaining receive capacity in units of credits. During initialization, the receiver sends an initialization flow control data link layer packet to the port on the other side of the link, reporting the size of its buffer space in credits; while receiving transaction packets, it actively and periodically sends update flow control data link layer packets to the port on the other side of the link, notifying it of the number of credits released.
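A hedged sketch of this credit bookkeeping is shown below; it is a simplified model of per-type credit accounting (real PCIE flow control uses cumulative, modular credit counters), and the names CreditTracker, init_fc, and update_fc are assumptions for illustration:

    class CreditTracker:
        """Tracks how many credits the link partner can still accept, per response type."""
        def __init__(self):
            self.granted = {"P": 0, "NP": 0, "CPL": 0}    # credits advertised by the partner
            self.consumed = {"P": 0, "NP": 0, "CPL": 0}   # credits consumed by packets sent

        def init_fc(self, packet_type, credits):
            # Initialization flow control: partner reports its buffer size in credits.
            self.granted[packet_type] = credits

        def update_fc(self, packet_type, released):
            # Update flow control: partner periodically reports released credits.
            self.granted[packet_type] += released

        def remaining(self, packet_type):
            return self.granted[packet_type] - self.consumed[packet_type]

        def consume(self, packet_type, cost=1):
            self.consumed[packet_type] += cost

    credits = CreditTracker()
    credits.init_fc("NP", 4)
    credits.consume("NP")
    print(credits.remaining("NP"))   # 3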

The port then schedules the transaction packets in the storage space based on the record queue, the scheduling blocking state of the previously fetched transaction packet, and the remaining receive capacity of the link partner device for transaction packets of each response type.

Fig. 4 shows a detailed module composition of a port ingress side processing module according to an embodiment of the present disclosure.

In this embodiment, the port divides its RAM (Random Access Memory) into a P RAM, an NP RAM, and a CPL RAM according to the response type of the stored transaction packets. The P RAM stores P transaction packets, the NP RAM stores NP transaction packets, and the CPL RAM stores CPL transaction packets.

The response types are sequentially enqueued according to the order in which the transaction packets are written, yielding the enqueue order of the transaction packets shown in the figure, and the transaction packets are scheduled according to a first-in-first-out queue management rule. When a transaction packet is to be fetched, the credit amount is fed back to the data link layer through the credit management and scheduling module; the filtering and routing module then performs processing on the transaction packet to be fetched, such as rule checking, target port determination, and multicast packet blocking determination.

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

Fig. 5 shows a detailed module composition of a port egress side processing module according to an embodiment of the present disclosure.

In this embodiment, the egress port arbitration module of the port arbitrates among the transaction packets to be forwarded. Specifically, arbitration among source ports can be performed according to the PHASE-table port arbitration policy. The egress scheduling module then forwards the fetched transaction packets according to their enqueue order.

During normal operation, the device on the other side of the data link periodically sends update flow control data link layer packets, and the egress scheduling module updates the credit accounting for P/NP/CPL packets according to this information; when a transaction packet of a given type is scheduled and dequeued, the egress scheduling module refreshes the credit accounting for that packet type.
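As an illustration only, the sketch below mimics phase-table arbitration among source ports followed by in-order forwarding; the phase table contents and port names are assumptions, not taken from the disclosure:

    from collections import deque

    # Assumed phase table: each slot names the source port that may send in that phase.
    phase_table = ["port0", "port1", "port0", "port2"]
    egress_queues = {
        "port0": deque(["pkt_a", "pkt_c"]),
        "port1": deque(["pkt_b"]),
        "port2": deque(),
    }

    def arbitrate_and_forward():
        # Walk the phase table in order; a port with no pending packet is skipped.
        for src in phase_table:
            if egress_queues[src]:
                pkt = egress_queues[src].popleft()
                print(f"forward {pkt} from {src}")

    arbitrate_and_forward()   # forwards pkt_a, pkt_b, pkt_c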

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

In one embodiment, scheduling the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity includes:

controlling fetching of the transaction packets from the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity; and

scheduling the fetched transaction packets based on a preset scheduling processing rule.

In this embodiment, the port controls fetching of the transaction packets from the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity, and then schedules the fetched transaction packets based on a preset scheduling processing rule.

In one embodiment, controlling fetching of the transaction packets from the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity includes:

controlling fetching of a current transaction packet to be fetched from the storage space based on the record queue and the scheduling blocking state; and

controlling fetching of a next transaction packet to be fetched from the storage space based on the record queue and the remaining receive capacity.

In this embodiment, specifically, the port controls fetching of the current transaction packet to be fetched from the storage space based on the record queue and the scheduling blocking state, and, after the current transaction packet to be fetched has been fetched, controls fetching of the next transaction packet to be fetched from the storage space based on the record queue and the remaining receive capacity.

In one embodiment, controlling fetching of the current transaction packet to be fetched from the storage space based on the record queue and the scheduling blocking state includes:

determining a first response type of the previously fetched transaction packet according to the record queue;

determining a second response type of the current transaction packet to be fetched according to the record queue; and

controlling fetching of the current transaction packet to be fetched based on the first response type, the second response type, and the scheduling blocking state.

In this embodiment, the port determines the first response type of the previously fetched transaction packet and the second response type of the current transaction packet to be fetched from the response types recorded in order in the record queue, and then controls fetching of the current transaction packet based on the first response type, the second response type, and the scheduling blocking state of the previously fetched transaction packet.

For example: if the previously fetched transaction packet is a P transaction packet and is blocked, and the current transaction packet to be fetched is an NP packet, the current packet cannot be fetched; otherwise the producer/consumer model would be violated. The current packet is fetched only once the previously fetched transaction packet is no longer blocked.

If, however, the previously fetched transaction packet is an NP transaction packet and is blocked, and the current transaction packet to be fetched is a CPL transaction packet, the current packet is fetched. Completion of a request depends on its response: if CPL transaction packets could not pass NP transaction packets, the request could never complete, and once the buffer space fills up the devices at both ends of the data link would deadlock.
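The two rules above can be condensed into a small decision function; this is a sketch that encodes only the cases described here (a complete scheduler would implement the full PCIE ordering table), and the function name is an assumption:

    def may_fetch_current(prev_type, prev_blocked, cur_type):
        """Whether the current head-of-queue packet may be fetched, given the
        response type and blocking state of the previously fetched packet."""
        if not prev_blocked:
            return True                   # nothing ahead is stalled
        if prev_type == "P" and cur_type == "NP":
            return False                  # an NP must not pass a blocked P
        if prev_type == "NP" and cur_type == "CPL":
            return True                   # a CPL may pass a blocked NP (avoids deadlock)
        return False                      # conservative default for cases not covered here

    print(may_fetch_current("P", True, "NP"))    # False
    print(may_fetch_current("NP", True, "CPL"))  # True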

In one embodiment, controlling fetching of the next transaction packet to be fetched from the storage space based on the record queue and the remaining receive capacity includes:

determining a second response type of the current transaction packet to be fetched according to the record queue;

determining a third response type of the next transaction packet to be fetched according to the record queue; and

controlling fetching of the next transaction packet to be fetched based on the second response type, the third response type, and the remaining receive capacity.

In this embodiment, the port determines the second response type of the current transaction packet to be fetched and the third response type of the next transaction packet to be fetched from the response types recorded in order in the record queue, and then controls fetching of the next transaction packet based on the second response type, the third response type, and the remaining receive capacity of the link partner device for transaction packets of each response type.

For example: and if the current transaction packet to be taken out is a P transaction packet and the residual receiving quantity of the link opposite-side equipment to the P transaction packet is insufficient, and the next transaction packet to be taken out is an NP transaction packet, the next transaction packet to be taken out cannot be taken out, otherwise, the model of the producer and the model of the consumer are violated. The next to-be-fetched transaction packet may not be fetched until the current to-be-fetched transaction packet is fetched.

If, however, the current transaction packet to be fetched is an NP transaction packet, the link partner device's remaining receive capacity for NP transaction packets is insufficient, and the next transaction packet to be fetched is a CPL transaction packet, the next packet is fetched. Completion of a request depends on its response: if CPL transaction packets could not pass NP transaction packets, the request could never complete, and once the buffer space fills up the devices at both ends of the data link would deadlock.
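Analogously, a sketch of the credit-gated decision for the next packet might look as follows; again only the two cases described above are encoded, and the names are illustrative:

    def may_fetch_next(cur_type, cur_has_credits, next_type):
        """Whether the next packet in the record queue may be fetched ahead of the
        current one when the link partner lacks credits for the current type."""
        if cur_has_credits:
            return False                  # the current packet goes first
        if cur_type == "P" and next_type == "NP":
            return False                  # an NP must not bypass a credit-stalled P
        if cur_type == "NP" and next_type == "CPL":
            return True                   # a CPL may bypass a credit-stalled NP
        return False                      # conservative default for other cases

    print(may_fetch_next("P", False, "NP"))    # False
    print(may_fetch_next("NP", False, "CPL"))  # True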

In one embodiment, scheduling the fetched transaction packets based on the preset scheduling processing rule includes:

screening out transaction packets with correct information from the fetched transaction packets according to a preset check rule;

determining target ports corresponding to the transaction packets with correct information according to preset port configuration information; and

forwarding the transaction packets with correct information to the corresponding target ports.

In this embodiment, the port checks the fetched transaction packets according to a preset check rule and screens out those with correct information. This process mainly involves checking information such as the ECRC (end-to-end CRC), address space, ID space, poison bit (a data bit indicating whether the transaction packet has an ECC or parity error), and packet header.

For each screened packet with correct information, the port determines the corresponding target port according to the port configuration information, that is, it determines to which port the packet should be forwarded from the current port. This process mainly involves address routing, ID routing, and implicit routing. The port then forwards the packet to that target port.
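A minimal check-and-route sketch is given below; the field names (ecrc_ok, poisoned, address) and the address-window routing table are assumptions standing in for the preset check rule and port configuration information:

    def check_and_route(pkt, port_config):
        """Verify basic packet fields, then pick a target port by address window."""
        # Check rule: ECRC and poison bit (header/ID checks omitted in this sketch).
        if not pkt.get("ecrc_ok", True) or pkt.get("poisoned", False):
            return None                               # information error: drop and report
        # Address routing: the port whose window contains the packet address wins.
        for port, (base, limit) in port_config.items():
            if base <= pkt["address"] < limit:
                return port
        return None                                   # no matching window: routing error

    cfg = {"port1": (0x1000, 0x2000), "port2": (0x2000, 0x3000)}
    print(check_and_route({"ecrc_ok": True, "address": 0x2400}, cfg))   # port2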

In an embodiment, scheduling the fetched transaction packet based on a preset scheduling process includes:

screening out and discarding transaction packets with information errors from the taken transaction packets according to a preset check rule;

registering error information corresponding to the information error transaction packet in a global error register;

and reporting the error information registered in the global error register to a host port.

In this embodiment, the port checks the fetched transaction packets according to a preset check rule, and screens out and discards those with erroneous information. Error information corresponding to the erroneous transaction packets is then registered in a global error register, and the registered error information is reported to the host port.

The global error register is an error register that is global to the PCIE switch chip; rather than being located within a single port, it is accessible by every port in the chip.

Besides the error information corresponding to erroneous transaction packets, the global error register can also register other error information produced during global data scheduling of the PCIE chip and report it to the host port, for example routing errors at each port and errors arising from permission processing.
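The sketch below models such a chip-global error register in a few lines; the masking and reporting fields are simplified assumptions and do not mirror the actual register layout:

    class GlobalErrorRegister:
        """Chip-global error log shared by all ports; unmasked entries are reported
        to the host port (modeled here as a returned list of message dicts)."""
        def __init__(self):
            self.entries = []

        def register(self, port, err_type, info, masked=False):
            self.entries.append({"port": port, "type": err_type,
                                 "info": info, "masked": masked})

        def report_to_host(self):
            return [e for e in self.entries if not e["masked"]]

    ger = GlobalErrorRegister()
    ger.register("port2", "ECRC", "end-to-end CRC mismatch")
    ger.register("port0", "ROUTING", "no target port", masked=True)
    print(ger.report_to_host())   # only the unmasked ECRC entry is reported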

In one embodiment, scheduling the fetched transaction packets based on the preset scheduling processing rule includes:

if a fetched transaction packet is a memory-write transaction packet, obtaining an address of the fetched transaction packet;

obtaining configuration information of a multicast extension register; and

determining a multicast group to which the fetched transaction packet belongs based on the address and the configuration information, and performing multicast processing on the fetched transaction packet.

In this embodiment, if the fetched transaction packet is of the memory-write type, the port performs multicast processing on it. Similarly, if the fetched transaction packet is a message packet carrying data, the port also performs multicast processing on it.

Specifically, referring to Fig. 6: for a packet of the memory-write type or a message packet carrying data, the port first determines from the configuration information of the multicast extension register whether multicast is enabled. If multicast is enabled, it checks whether the multicast index information, multicast group information, and maximum multicast group information conform to the rules. If they do, it obtains the multicast base address from the configuration information and determines, based on the multicast base address, the address of the fetched transaction packet, and the multicast group information, whether the packet address falls within the multicast address range. If the address is in range, the multicast group to which the fetched transaction packet belongs is determined based on the multicast base address, the packet address, and the multicast index information. The port then determines whether that multicast group is blocked and performs multicast processing on the fetched transaction packet according to the blocking determination. During multicast processing, multicast address substitution and multicast packet replication are completed using the configuration information of the multicast extension registers of the other ports.
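As a rough illustration of the group lookup (loosely modeled on a multicast capability's base address, index position, and group count; the parameter names here are assumptions, not the register fields of the disclosure), the address-to-group mapping might be sketched as:

    def multicast_group(addr, mc_enable, mc_base, mc_index_position, mc_num_group):
        """Return the multicast group hit by a memory-write address, or None if
        multicast is disabled or the address is outside the multicast range."""
        if not mc_enable or addr < mc_base:
            return None
        group = (addr - mc_base) >> mc_index_position   # which group window is hit
        if group >= mc_num_group:
            return None                                  # beyond the configured groups
        return group

    # Example: base 0x8000_0000, 4 KiB window per group, 8 groups configured.
    print(multicast_group(0x8000_3000, True, 0x8000_0000, 12, 8))   # 3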

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

In one embodiment, scheduling the fetched transaction packets based on the preset scheduling processing rule includes:

if a fetched transaction packet is a transaction packet related to a downstream port request, performing permission control on the fetched transaction packet; and

scheduling the fetched transaction packet according to a result of the permission control.

In this embodiment, if the fetched transaction packet is related to a downstream port request (for example, a packet from a downstream port to another downstream port, a packet from a downstream port to the upstream port, or a packet routed back to the downstream port that issued it), the port performs permission control on the fetched transaction packet and then schedules it according to the result of the permission control. The permission control includes, but is not limited to, source validation, transfer blocking, and request redirection.
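A toy permission-control table is sketched below; the rule set, port names, and actions are purely illustrative assumptions, not the disclosure's policy:

    # Assumed access-control rules for packets tied to downstream-port requests.
    ACL = {
        ("downstream1", "downstream2"): "block",      # peer-to-peer transfer blocked
        ("downstream1", "upstream"):    "allow",
        ("downstream2", "upstream"):    "redirect",   # e.g. redirected to the host
    }

    def permission_control(pkt):
        action = ACL.get((pkt["src"], pkt["dst"]), "allow")
        if action == "block":
            return None                               # drop; an error may be registered
        if action == "redirect":
            return {**pkt, "dst": "host"}             # forward a redirected copy
        return pkt                                    # allow: schedule unchanged

    print(permission_control({"src": "downstream2", "dst": "upstream", "tag": 7}))
    # {'src': 'downstream2', 'dst': 'host', 'tag': 7}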

Fig. 7 shows a detailed module composition of a port processing module according to an embodiment of the present disclosure.

In this embodiment, the ingress-side processing module of port 0 writes received transaction packets into the RAM according to their response types and records their enqueue order in the record queue. During port operation, the credit management and scheduling module feeds credits back to the data link layer according to the data stored in the RAM and the enqueue order of the transaction packets. The filtering and routing module then performs processing on the transaction packet to be fetched, such as rule checking, target port determination, and multicast packet blocking determination (which can be made according to feedback from the multicast packet matching and blocking module); part of this processing requires interaction with the register module. The permission control processing module performs permission control on transaction packets related to downstream port requests, performs permission control and multicast processing on transaction packets that hit the multicast packet space, and sends successfully processed transaction packets, together with request transaction packets from the upstream port and the target port information, to the egress direction decision module for processing. Transaction packets with filtering or routing errors, transaction packets that fail permission control, and transaction packets that do not hit the multicast packet space are sent, via the global error processing, to the egress direction decision module for processing.

The egress port arbitration module of the egress-side processing module of port 0 arbitrates among the transaction packets to be forwarded. Specifically, arbitration among source ports can be performed according to the PHASE-table port arbitration policy. The egress scheduling module then forwards the fetched transaction packets according to their enqueue order.

Since port 1 is equivalent to port 0, the detailed module composition of its port processing module is not described again.

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

Fig. 8 shows a data scheduling apparatus for a PCIE switch chip port according to an embodiment of the present disclosure, where the apparatus includes:

a writing module 310 configured to write a transaction packet received by the PCIE switch chip port into a storage space of the port according to a response type of the transaction packet;

a recording module 320 configured to sequentially enqueue the response types into a preset record queue according to the order in which the transaction packets are written into the storage space;

a determining module 330 configured to determine a scheduling blocking state of a previously fetched transaction packet in the storage space;

an obtaining module 340 configured to obtain, through a data link layer, a remaining receive capacity of a link partner device for transaction packets of each response type; and

a scheduling module 350 configured to schedule the transaction packets in the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

control fetching of the transaction packets from the storage space based on the record queue, the scheduling blocking state, and the remaining receive capacity; and

schedule the fetched transaction packets based on a preset scheduling processing rule.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

control fetching of a current transaction packet to be fetched from the storage space based on the record queue and the scheduling blocking state; and

control fetching of a next transaction packet to be fetched from the storage space based on the record queue and the remaining receive capacity.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

determine a first response type of the previously fetched transaction packet according to the record queue;

determine a second response type of the current transaction packet to be fetched according to the record queue; and

control fetching of the current transaction packet to be fetched based on the first response type, the second response type, and the scheduling blocking state.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

determine a second response type of the current transaction packet to be fetched according to the record queue;

determine a third response type of the next transaction packet to be fetched according to the record queue; and

control fetching of the next transaction packet to be fetched based on the second response type, the third response type, and the remaining receive capacity.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

screen out transaction packets with correct information from the fetched transaction packets according to a preset check rule;

determine target ports corresponding to the transaction packets with correct information according to preset port configuration information; and

forward the transaction packets with correct information to the corresponding target ports.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

screen out and discard transaction packets with erroneous information from the fetched transaction packets according to a preset check rule;

register error information corresponding to the erroneous transaction packets in a global error register; and

report the error information registered in the global error register to a host port.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

if a fetched transaction packet is a memory-write transaction packet, obtain an address of the fetched transaction packet;

obtain configuration information of a multicast extension register; and

determine a multicast group to which the fetched transaction packet belongs based on the address and the configuration information, and perform multicast processing on the fetched transaction packet.

In an exemplary embodiment of the present disclosure, the apparatus is configured to:

if a fetched transaction packet is a transaction packet related to a downstream port request, perform permission control on the fetched transaction packet; and

schedule the fetched transaction packet according to a result of the permission control.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
