Data transmission device, data transmission method and chip

Document No.: 1923573 · Published: 2021-12-03

Note: This technology, "Data transmission device, data transmission method and chip," was designed and created by 龚源泉, 徐子轩, and 夏杰 on 2021-09-18. Abstract: The present application provides a data transmission device, a data transmission method, and a chip for transmitting a plurality of data from M source interfaces to N destination interfaces, comprising: M source interface memories for storing the data; a latch register for storing, for each source interface memory, the N data prefetched for the N destination interfaces; M first schedulers configured to prefetch the N data of each source interface memory by round-robin scheduling; and N second schedulers configured to distribute the data stored in the latch register to the destination interfaces by round-robin scheduling. Through data-prefetch latching and the isolation of the two-stage scheduling logic, the first and second schedulers can run simultaneously without constraining each other; the device has a clear structure and simple logic, optimizes chip timing, and eases back-end implementation.

1. A data transmission apparatus for transmitting a plurality of data from M source interfaces to N destination interfaces, comprising:

M source interface memories for storing the plurality of data;

a latch register for storing, for each source interface memory, the N data prefetched for the N destination interfaces, one per destination interface;

M first schedulers, each configured to prefetch the N data of one source interface memory by round-robin scheduling; and

N second schedulers, each configured to distribute the data stored in the latch register to its destination interface by round-robin scheduling;

wherein M and N are integers, M ≥ 2, and N ≥ 1.

2. The data transmission apparatus of claim 1, wherein each source interface memory has a partition corresponding to each destination interface and is configured to:

send a first send request to the first scheduler on a per-partition basis, wherein the first scheduler responds to the request of a partition by round-robin scheduling and prefetches the data stored in that partition into the corresponding latch register; the first send request of a partition is set to 1 if the data stored in its corresponding latch register are invalid, and cleared to 0 otherwise; only partitions whose first send request is 1 participate in the first scheduler's round-robin scheduling, and all source interface memories perform this operation in parallel.

3. The data transmission apparatus of claim 2, wherein each source interface can send a second send request to each destination interface corresponding to the data it has latched in the latch register, and each second scheduler can select, by round robin, one source interface that has sent a second send request and send the data stored in the latch register for that source interface to the destination interface corresponding to the second scheduler.

4. The data transmission apparatus of claim 3, wherein the destination interface can send a third send request to the latch register from which it received the second send request; in response to the second send request of the latch register, the third send request of the destination interface toward the selected source interface is set to 1, and all second schedulers perform this operation in parallel.

5. The data transmission apparatus of claim 4, wherein the latch register is configured to: when the third send request received by the corresponding source interface from the destination interface is 1, clear the second send request sent to that destination interface.

6. A data transmission method for transmitting data using the data transmission apparatus of any one of claims 2 to 5, comprising the steps of:

S1, prefetching one datum for each source interface memory corresponding to each destination interface and latching it in the latch register;

S2, retrieving, by the second schedulers in round-robin fashion, one datum from the latch register for each destination interface in a clock cycle;

S3, prefetching, by the first schedulers in round-robin fashion, data into the latch-register entries emptied in step S2 from the corresponding partitions of the source interface memories.

7. A data transmission method for transmitting a plurality of data from M source interfaces to N destination interfaces, comprising the steps of:

a data prefetching step: for each source interface, prefetching N data, one per destination interface, and latching them in a latch register;

a first scheduling step: retrieving, from the data latched in the latch register, one datum for each destination interface by round-robin scheduling;

a second scheduling step: prefetching at most N data into the latch register by round-robin scheduling to fill the slots vacated in the first scheduling step.

8. The data transmission method of claim 7, wherein each source interface can send a second send request to the destination interfaces corresponding to the data it has latched in the latch register; in the first scheduling step, one source interface that has sent a second send request is selected by round robin, and the data stored in the latch register for that source interface are sent to the destination interface corresponding to the second scheduler.

9. The data transmission method of claim 7, wherein the destination interface can send a third send request to the latch register from which it received the second send request; in the second scheduling step, in response to the third send request received by the corresponding source interface being 1, data are prefetched from the partition of that source interface corresponding to the destination interface.

10. A chip applying the data transmission device of any one of claims 1 to 5 or the data transmission method of any one of claims 6 to 9.

Technical Field

The present invention relates to the field of network communication technologies, and in particular, to a data transmission device, a data transmission method, and a chip.

Background

In some application scenarios, chip scale keeps growing; in fields such as artificial intelligence and network switching in particular, large amounts of data need to be classified, stored, and processed. These chips often face the problem that data from multiple interfaces must be transmitted to multiple different interfaces. Assume there are m input interfaces, each alternately outputting data of the same size to be transmitted to n destination interfaces; each datum carries a destination interface number indicating the destination interface to which it should be sent.

As shown in fig. 1, how to select among the source interfaces in each clock cycle, output at most one datum to each destination interface, and ensure that the source interfaces receive fair scheduling opportunities is a problem that must be solved in chip design. Moreover, the design of the source-interface scheduling directly affects the timing and performance of the chip.

One approach uses a memory to partition the data of each source interface by destination interface, storing separately the data destined for each destination interface. As shown in fig. 2, this memory is hereinafter referred to as a source interface memory for convenience of description. A first source interface memory is selected by round robin, and a destination interface is selected from it by round robin; assume the selected destination interface is a. A second source interface memory is then selected by round robin, and a destination interface other than a is selected from it; assume it is b. A third source interface memory is selected in the same way, and a destination interface different from a and b is selected from it; assume it is c. This continues until all n destination interfaces have been selected. The method requires two scheduling decisions per step: first selecting a source interface memory, then selecting a destination interface. To guarantee fairness and selection efficiency, the source interface memory selected at each step must store data for destination interfaces other than those already selected. Thus, the later the step, the more the selection of the source interface memory and of a destination interface within it is constrained by the destination interfaces already chosen, and the more complicated the logic becomes.

The selection of the last source interface memory and the last destination interface carries the most constraints, and its logic is the most complicated. As n grows, the combinational-logic timing of the chip implementation can degrade. Moreover, because the selections for successive interfaces depend on one another, the selection must be split into a multi-stage pipeline; the selection is irregular and cyclic, the control logic is complex, and latency increases.
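The prior-art selection described above can be sketched in Python; the function and variable names (`prior_art_select`, `pending`) are illustrative only, and the per-step round-robin pointers are omitted for brevity. The growing exclusion set `taken` is what loads the later steps with ever more combinational constraints:

```python
def prior_art_select(pending, n_dest):
    """Sketch of the prior-art two-level selection for one clock cycle.

    pending[src] = set of destination ids with data queued at source src.
    Returns {dest: src} grants."""
    grants = {}
    taken = set()
    for src in range(len(pending)):      # first level: walk the sources
        # second level: pick a destination not yet taken; the exclusion
        # set grows each step, so later steps need more logic
        candidates = sorted(pending[src] - taken)
        if candidates:
            dest = candidates[0]
            grants[dest] = src
            taken.add(dest)
        if len(taken) == n_dest:         # all destinations served
            break
    return grants
```

With three sources and two destinations, the third source is only consulted if the first two could not cover both destinations, which mirrors the "last step is the most constrained" observation above.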

Therefore, there is a need for an improvement in the conventional data transmission apparatus and data transmission method.

Disclosure of Invention

In view of the above problems in the prior art, the present application provides a data transmission apparatus for transmitting a plurality of data from M source interfaces to N destination interfaces, including: M source interface memories for storing the data; a latch register for storing, for each source interface memory, the N data prefetched for the N destination interfaces; M first schedulers configured to prefetch the N data of each source interface memory by round-robin scheduling; and N second schedulers configured to distribute the data stored in the latch register to the destination interfaces within the same clock cycle by round-robin scheduling; wherein M and N are integers, M ≥ 2, and N ≥ 1.

Optionally, each source interface memory has a partition corresponding to each destination interface and is configured to:

send a first send request to the first scheduler on a per-partition basis, wherein the first scheduler responds to the request of a partition by round-robin scheduling and prefetches the data stored in that partition into the corresponding latch register; the first send request of a partition is set to 1 if the data stored in its corresponding latch register are invalid, and cleared to 0 otherwise; only partitions whose first send request is 1 participate in the first scheduler's round-robin scheduling, and all source interface memories perform this operation in parallel.

Optionally, each source interface may send a second send request to each destination interface corresponding to the data it has latched in the latch register, and each second scheduler may select, by round robin, one source interface that has sent a second send request and send the data stored in the latch register for that source interface to the destination interface corresponding to the second scheduler.

Optionally, the destination interface may send a third send request to the latch register from which it received the second send request; in response to the second send request of the latch register, the third send request of the destination interface toward the selected source interface is set to 1, and all second schedulers perform this operation in parallel.

Optionally, the latch register is configured to: when the third send request from the destination interface received by the corresponding source interface is 1, clear the second send request between that source interface and the destination interface that sent the third send request.

To achieve the above object, the present application provides a data transmission method that uses the above data transmission apparatus to transmit data, including the steps of:

S1, prefetching one datum for each source interface memory corresponding to each destination interface and latching it in the latch register;

S2, retrieving, by the second schedulers in round-robin fashion, one datum from the latch register for each destination interface in a clock cycle;

S3, prefetching, by the first schedulers in round-robin fashion, data into the latch-register entries emptied in step S2 from the corresponding partitions of the source interface memories.

To achieve the above object, the present application provides a data transmission method for transmitting a plurality of data from M source interfaces to N destination interfaces, including the steps of:

a data prefetching step: for each source interface, prefetching N data, one per destination interface, and latching them in a latch register;

a first scheduling step: retrieving, from the data latched in the latch register, one datum for each destination interface by round-robin scheduling;

a second scheduling step: prefetching at most N data into the latch register by round-robin scheduling to fill the slots vacated in the first scheduling step.

Optionally, each source interface may send a second send request to each destination interface corresponding to the data it has latched in the latch register; in the first scheduling step, one source interface that has sent a second send request may be selected by round robin, and the data stored in the latch register for that source interface are sent to the destination interface corresponding to the second scheduler.

Optionally, the destination interface may send a third send request to the latch register from which it received the second send request; in the second scheduling step, in response to the third send request received by the corresponding source interface being 1, data are prefetched from the partition of that source interface corresponding to the destination interface.

In order to achieve the above object, the present application provides a chip, which applies the data transmission apparatus or the data transmission method described above.

The data transmission device and the data transmission method provided by the present application have at least the following advantages:

Through data-prefetch latching and the isolation of the two-stage scheduling logic, the first and second schedulers can run simultaneously without constraining each other; the data transmission device has a clear structure and simple logic, optimizes chip timing, and eases back-end implementation.

Drawings

FIG. 1 is a schematic diagram of interface data transmission in the prior art;

FIG. 2 is a schematic diagram of a data storage method in the prior art;

FIG. 3 is a schematic structural diagram of a data transmission device according to an embodiment of the present application;

FIG. 4 is a schematic diagram of a source interface of the data transmission apparatus shown in FIG. 3;

FIG. 5 is a schematic diagram of a source interface and the first destination interface of the data transmission apparatus in FIG. 3;

FIG. 6 is a schematic flowchart of a data transmission method provided in an embodiment of the present application;

FIG. 7 is a flowchart illustrating another data transmission method provided in an embodiment of the present application.

Detailed Description

Hereinafter, exemplary embodiments are described in detail with reference to the accompanying drawings. The present application is not, however, limited to the following embodiments; it includes various changes, substitutions, and alterations within the technical scope of the present disclosure. The terms "first," "second," and the like may be used to describe various elements, but the elements are not limited by these terms, which serve only to distinguish one element from another; thus, an element referred to as a first element in one embodiment may be referred to as a second element in another. The singular forms "a," "an," and "the" do not exclude the plural unless the context requires otherwise. The following description of the embodiments is illustrative; other advantages and capabilities of the present application will become apparent to those skilled in the art from this disclosure.

This embodiment provides a data transmission device and a data transmission method, implemented in chip design, that are suitable for scheduling data of multiple interfaces and solve the prior-art problems of complex data-scheduling control logic and high latency.

The data transmission apparatus 100 provided in this embodiment is applied to transmit a plurality of data from M source interfaces to N destination interfaces, where M ≥ 2 and N ≥ 1; for convenience of description, M = 3 and N = 2 are used as an example hereinafter.

As shown in fig. 3, the data transmission apparatus 100 includes: three source interfaces 20, each source interface 20 including a source interface memory 10 for storing the plurality of data of that source interface 20; and a latch register 30 for storing the two data prefetched by each source interface memory 10, one for each of the two destination interfaces 40.

Optionally, as shown in figs. 3 and 4, each source interface memory 10 is partitioned according to the destination interface 40 of the data stored in it, which facilitates prefetching the data of the source interface memory 10.

Specifically, the three source interface memories 10 are a first source interface memory 11, a second source interface memory 12 and a third source interface memory 13, respectively, the destination interface 40 includes a first destination interface 41 and a second destination interface 42, and each source interface memory 10 has a first partition 01 corresponding to the first destination interface 41 and a second partition 02 corresponding to the second destination interface 42.

The first source interface memory 11 stores data a and data B corresponding to the first destination interface 41, and data C and data D corresponding to the second destination interface 42; the second source interface memory 12 stores data E and data F corresponding to the first destination interface 41, and data G and data H corresponding to the second destination interface 42; the third source interface memory 13 stores data I and data J corresponding to the first destination interface 41 and data K and data L corresponding to the second destination interface 42.

The "two data prefetched by each source interface memory 10 for the two destination interfaces 40" may specifically be:

the latch register 30 stores data a, data C, data E, data G, data I, and data K.

Optionally, the data transmission apparatus 100 includes three first schedulers 50 configured to store the data of the three source interface memories 10 into the latch register 30 by round-robin scheduling.

Optionally, the first partition 01 and the second partition 02 of each source interface memory 10 can each send a request (hereinafter the first send request) to the latch register 30. If the latch-register entry corresponding to a partition (e.g., the first partition 01 of the first source interface memory 11) is invalid, the first send request of that partition is set to 1; otherwise it is cleared. Only partitions whose first send request is 1 participate in the round-robin scheduling, and all source interface memories 10 perform this operation in parallel, so each source interface memory 10 prefetches data for each destination interface 40.
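A minimal Python sketch of one first scheduler's behavior, under assumed naming (`first_scheduler_step` and `latch_valid` are not from the patent): a partition participates in the round robin only when its latch slot is invalid and it still holds data, i.e., when its first send request is 1:

```python
def first_scheduler_step(partitions, latch_valid, rr_ptr):
    """One round-robin prefetch decision for one source interface memory.

    partitions[d]: FIFO list of data queued for destination d.
    latch_valid[d]: True if the latch slot for destination d holds data.
    rr_ptr: round-robin pointer. Returns (granted dest or None, new ptr)."""
    n = len(partitions)
    for i in range(n):
        d = (rr_ptr + i) % n
        # first send request == 1 iff the latch slot is empty and data wait
        if not latch_valid[d] and partitions[d]:
            return d, (d + 1) % n       # grant, advance past the winner
    return None, rr_ptr
```

In the M = 3, N = 2 example, three such schedulers would run in parallel, one per source interface memory, each maintaining its own pointer.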

Optionally, in this embodiment, as shown in fig. 5, a second scheduler 60 is provided for each destination interface 40. The latch register 30 can send a request (hereinafter the second send request) to each destination interface 40 corresponding to the data it has latched; correspondingly, the destination interface 40 can send a request (hereinafter the third send request) to the latch register 30 from which it received a second send request.

The second schedulers 60 collect the latched data from the different source interfaces 20 together with the second send requests addressed to their destination interfaces 40. Each second scheduler 60 can in theory receive at most three second send requests (one per source interface memory 10). Each second scheduler 60 selects, by round robin, one source interface 20 that has sent a second send request, and the data stored in the latch register 30 for that source interface 20 are sent to the destination interface 40 corresponding to the second scheduler 60. At the same time, the third send request of the destination interface 40 toward the selected source interface 20 is set to 1; a third send request of 1 indicates that the destination interface 40 can receive data from the latch register 30 in that clock cycle. All second schedulers 60 perform this operation in parallel, so in each clock cycle each destination interface 40 has one datum output.
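The grant logic of one second scheduler can be sketched as follows; the names (`second_scheduler_step`, `requests`) are illustrative. Each destination's scheduler round-robins among the sources whose second send request is set and grants at most one per clock cycle:

```python
def second_scheduler_step(requests, rr_ptr):
    """One round-robin grant for one destination's second scheduler.

    requests[s]: True if source s has valid latched data for this
    destination (its second send request is set).
    Returns (granted source or None, updated round-robin pointer)."""
    m = len(requests)
    for i in range(m):
        s = (rr_ptr + i) % m
        if requests[s]:
            return s, (s + 1) % m       # grant and advance past the winner
    return None, rr_ptr
```

The grant returned here corresponds to the destination interface raising its third send request toward the winning source's latch slot.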

Optionally, for each first scheduler 50, when the third send request received by the corresponding source interface 20 is 1, the second send request of the latch entry corresponding to that third send request is cleared. Meanwhile, if the corresponding partition still has data for the latch register 30, the first send request of that partition is set back to 1, so the partition can again participate in round-robin scheduling and read and latch data from the corresponding area of the source interface memory 10. In this way, data can be prefetched from the source interface memory 10 again once the previously latched data have been transferred to the destination interface 40.
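The request handshake just described can be sketched, with invented flag names, for a single (source, destination) pair: a third send request of 1 consumes the latched datum, clearing the second send request, and re-arms the partition's first send request if data remain:

```python
def update_requests(third_req, second_req, first_req, partition_nonempty):
    """Update the request flags for one (source, destination) pair.

    Returns the new (second_req, first_req) pair."""
    if third_req:
        second_req = 0                  # latched datum was consumed
        if partition_nonempty:
            first_req = 1               # re-arm the prefetch for this slot
    return second_req, first_req
```

The three request flags thus form a closed loop: first request drives prefetch, second request drives distribution, and third request acknowledges consumption and restarts the cycle.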

The data transmission apparatus 100 provided in this embodiment has at least the following advantages:

Through data-prefetch latching and the isolation of the two-stage scheduling logic, the first scheduler 50 and the second scheduler 60 can run simultaneously without constraining each other; the data transmission device 100 has a clear structure and simple logic, optimizes chip timing, and eases back-end implementation.

Optionally, as shown in fig. 6, this embodiment further provides a data transmission method for distributing a plurality of data from M source interfaces 20 to N destination interfaces 40, including the steps of:

a data prefetching step: for each source interface 20, prefetching N data, one per destination interface 40, and latching them in the latch register 30;

a first scheduling step: retrieving, from the data latched in the latch register 30, one datum for each destination interface 40 by round-robin scheduling;

a second scheduling step: prefetching at most N data into the latch register 30 by round-robin scheduling to fill the slots vacated in the first scheduling step.
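The steps above can be put together in a small behavioral simulation; all names (`run_cycle`, `mem`, `latch`, `rr_dest`) are assumptions for illustration. Note that the two scheduling stages communicate only through the latch, which is why the hardware versions can run concurrently; the sequential order here exists only because software executes serially:

```python
def run_cycle(mem, latch, rr_dest):
    """Simulate one clock cycle of the two-stage scheduling flow.

    mem[s][d]: FIFO per source s and destination d (the partitions).
    latch[s][d]: prefetched datum or None (the latch register).
    rr_dest[d]: round-robin pointer of destination d's second scheduler.
    Returns the list of (dest, datum) outputs for this cycle."""
    m, n = len(mem), len(rr_dest)
    outputs = []
    for d in range(n):                      # first scheduling step
        for i in range(m):
            s = (rr_dest[d] + i) % m
            if latch[s][d] is not None:
                outputs.append((d, latch[s][d]))
                latch[s][d] = None          # third request clears the slot
                rr_dest[d] = (s + 1) % m
                break
    for s in range(m):                      # second scheduling step (refill)
        for d in range(n):
            if latch[s][d] is None and mem[s][d]:
                latch[s][d] = mem[s][d].pop(0)
    return outputs
```

Running this with the example data of the embodiment (data A-L, three sources, two destinations) outputs one datum per destination per cycle while the freed latch slots are refilled from the partitions.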

Optionally, each source interface 20 can send a second send request to each destination interface 40 corresponding to the data it has latched in the latch register 30; in the first scheduling step, one source interface 20 that has sent a second send request can be selected by round robin, and the data stored in the latch register 30 for that source interface 20 are sent to the destination interface 40 corresponding to the second scheduler 60.

Optionally, the destination interface 40 can send a third send request to the latch register 30 from which it received the second send request; in the second scheduling step, in response to the third send request received by the corresponding source interface 20 being 1, data are prefetched from the partition of that source interface 20 corresponding to the destination interface 40.

Optionally, as shown in fig. 7, this embodiment further provides a data transmission method that uses the data transmission apparatus 100 described above to schedule data.

Optionally, the data transmission method includes the steps of:

s1, prefetching a data for each source interface memory 10 corresponding to each destination interface 40 and latching the data in the latch register 30;

s2, calling a data from the latch register 30 for each destination interface 40 in a clock cycle by the second scheduler 60 in a polling scheduling manner;

s3, pre-fetching data for the latch register 30 in step S2 from the corresponding partition of the source interface memory 10 by the first scheduler 50 in a round robin manner.

The data transmission method provided by this embodiment achieves data transmission through data prefetching and two independent scheduling stages; the logic is simple and easy to implement, and the chip timing is optimized.

The specific implementation manner of step S2 and step S3 may refer to the above description of the data transmission device 100 and the data transmission method, and will not be described herein again.

Optionally, the present embodiment further provides a chip (not shown) applying the data transmission apparatus 100 or the data transmission method described above.

The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.
