Elastic shared cache architecture for on-chip message processing

Document No.: 1141157    Publication date: 2020-09-11

Note: This technology, "Elastic shared cache architecture for on-chip message processing" (一种用于片内报文处理的弹性共享缓存架构), was designed and created by 杨惠, 李韬, 熊智挺, 吕高锋, 赵国鸿, 毛席龙, 冯振乾, 全巍, 刘汝霖 and 李存禄 on 2020-06-28. Abstract: The invention relates to an elastic shared cache architecture for on-chip message processing, which addresses problems of existing high-performance network processor storage technology such as heavy waste of network bandwidth and message processing delay. The architecture comprises three parts, namely message cache region management, descriptor management and interrupt management, which respectively realize the message cache region management, descriptor management and interrupt management functions. The architecture supports elastic storage of messages of unequal length, supports a fast processing path for messages received and sent by the network interface as well as direct access by the on-chip multi-core CPU, and supports both polling-based and interrupt-driven message transmission and processing, so that storing and sending messages disturbs the CPU as little as possible. This realizes efficient software and hardware interaction and meets the design requirements of a high-performance network processor chip.

1. An elastic shared cache architecture for on-chip message processing, characterized by comprising three parts, namely message cache region management, descriptor management and interrupt management, which respectively realize the message cache region management, descriptor management and interrupt management functions;

the message cache region management dynamically adjusts the number of storage units corresponding to each message length by configuring a block capacity counter,

logically, the storage areas for different message lengths are organized with the same number of blocks but different block sizes,

the capacity of each block is recorded in a count register set configurable by the CPU, and when software resets these values a new division of the variable storage space takes effect,

in the storage space recovery mechanism, each time a message is sent an address release counter is incremented; when the counter reaches a threshold set by software, the address of the current block is recovered and its base address is written into the address recovery queue of the corresponding message-length storage space;

in the descriptor management, the descriptor queues comprise 2 types of descriptors, namely a network interface receive/send descriptor queue and a message receive/send descriptor queue constructed by the CPU;

the interrupt management supports two modes of message transmission and processing: the first is polling-based transmission, processing and sending, in which the interrupt enable is turned off and, once the DM detects that a message in the cache region is ready to be transmitted, processed or sent, the processing and sending are completed automatically without generating an interrupt; the second is interrupt-driven transmission, processing and sending, which is compatible with the traditional interrupt-based message processing flow: when the IM detects that the counter has reached the value set by the threshold register, an interrupt signal is generated and sent to the on-chip interrupt controller for corresponding processing.

2. The elastic shared cache architecture for on-chip message processing according to claim 1, wherein

in the allocation of the storage space managed by the message cache region, when data arriving from the network interface is written into the storage space, the address allocation logic obtains a base address from the address queue of the storage space matching the length type of the incoming message and, combined with an allocation counter, calculates the storage position of the message, thereby realizing storage and recovery of the elastic cache; the recovery logic and recovery queue are implemented independently for each of the multiple storage spaces.

3. The elastic shared cache architecture for on-chip message processing according to claim 1, wherein

in the descriptor management, both kinds of queues are organized as ring linked lists; to distinguish the priorities of messages received and sent by the network interface, the first type of descriptor queue is further split by high and low priority, so the descriptor management handles 3 receive/send descriptor queues in total, namely a high-priority descriptor ring, a low-priority descriptor ring and a message descriptor ring constructed for the CPU.

4. The elastic shared cache architecture for on-chip message processing according to claim 3, wherein

the management of the high-priority descriptor ring and the low-priority descriptor ring is based on three pointers, namely a write descriptor pointer rx indicating the position of the message header for uplink data, a read descriptor pointer tx indicating the position of the message header for downlink data, and a software processing pointer p indicating the position of the message header being processed by the CPU; if rx is not equal to p, a message is waiting to be sent to the CPU for processing, and if p is not equal to tx, a message is waiting to be sent by the network interface.

5. The elastic shared cache architecture for on-chip message processing according to claim 3, wherein the message descriptor ring constructed by the CPU is managed with two pointers, namely a read descriptor pointer tx for downlink data, indicating the position of the header of the message sent by HW, and a software processing pointer p, wherein the interval p → tx is the free message buffer area,

and when the count of newly received messages exceeds the threshold register, an interrupt is generated and sent to the on-chip CPU.

6. The elastic shared cache architecture for on-chip message processing according to claim 4 or 5, wherein uplink data refers to data moving from the network interface to the Packet RAM,

and downlink data refers to data moving from the Packet RAM to the network interface.

7. The elastic shared cache architecture for on-chip message processing according to claim 1, wherein the processing flow for delivering a message from the network interface into the elastic shared cache is as follows:

1) the RECV_PKT module receives a message, carrying metadata, from the network interface;

2) a free buffer ID (BID) is read, and the message is written into the RAM through the data bus according to the BID;

3) the BID is filled into the metadata, and the metadata is sent to the HW_WRITE_DES module;

4) the HW_WRITE_DES module writes the descriptor into the descriptor queue;

5) the message counter of the corresponding CPU core is updated, and if the counter exceeds the threshold, an interrupt is generated.

8. The elastic shared cache architecture for on-chip message processing according to claim 1, wherein the processing flow for a message issued by the CPU is as follows:

1) after the CPU finishes processing a message or generates a message, it updates the descriptor queue;

2) HW_READ_DES polls the descriptor queue and, on finding a message to send, reads the descriptor information;

3) after reading the descriptor, HW_READ_DES sends it to the SEND_PKT module;

4) SEND_PKT reads the message from the RAM according to the BID in the metadata and recovers the BID;

5) SEND_PKT transmits the read message and the metadata to the network interface.

Technical Field

The invention relates mainly to the field of high-performance network processor chip design, and in particular to an elastic shared cache architecture for efficient on-chip message processing.

Background

With the rapid development of deep-submicron technology, high-performance multi-processor systems-on-chip (MPSoC) have been widely used in many application fields, such as signal processing systems, streaming media processing and network processing. A heterogeneous MPSoC integrates multiple processor cores on a single chip and connects them with resources such as external interfaces, memories and hardware acceleration components through a high-speed on-chip interconnection network, forming a multi-core parallel processing architecture. As process nodes advance to 14nm and beyond, the area and power consumption of transistors and on-chip memory cells have dropped remarkably; they are no longer the bottleneck of chip design and have, to some extent, even entered a state of "relative surplus".

Network processors have evolved from the first generation, oriented to routing in mid- and low-end switches, through the second generation, which adopted unified external interface standards, to the third generation, which integrates on-chip multi-core/many-core processing with high-speed storage. As the hardware resources of network processors continue to grow, organizing hardware resources such as the on-chip storage architecture reasonably and efficiently has become a major bottleneck restricting high-performance service processing. The memory wall is a constant challenge for high-performance computing, and it is also very serious for network processors. The on-chip memory structure directly determines message throughput and capacity. Because the streaming memory-access characteristics of network message processing differ markedly from those of traditional processors in temporal and spatial locality, an on-chip cache system that efficiently fits the network message processing flow must be designed around these message-oriented streaming access characteristics, optimizing the message access path and providing the network processor with low-latency, highly deterministic storage access, which is the basic premise for processing network messages at line speed.

In actual network traffic processing, message lengths vary rather than being fixed, and traditional chained storage of messages greatly increases design complexity and the cost of storing chained addresses. Dividing the on-chip cache into fixed-length blocks wastes a great deal of storage space; for example, if the cache region is divided and addressed in equal 2KB blocks, storing a 64B message leaves most of the block unused and sharply reduces storage utilization. On the other hand, when the network interface stores a received message in a dedicated on-chip cache region, the CPU must later move the message from that cache region into a CPU-accessible address space over the high-speed on-chip network before deep processing, which wastes a great deal of on-chip network bandwidth and increases message processing delay.

Disclosure of Invention

Aiming at problems of the existing high-performance network processor storage technology, such as heavy waste of network bandwidth and message processing delay, the invention provides an elastic shared cache architecture that supports elastic storage of messages of unequal length, supports a fast processing path for messages received and sent by the network interface as well as direct access by the on-chip multi-core CPU, and supports both polling-based and interrupt-driven message transmission and processing, so that storing and sending messages disturbs the CPU as little as possible and efficient software and hardware interaction is realized, thereby meeting the design requirements of a high-performance network processor chip.

The invention adopts the following technical scheme:

An elastic shared cache architecture for on-chip message processing comprises three parts, namely message cache region management (BM), descriptor management (DM) and interrupt management (IM), which respectively realize the message cache region management, descriptor management and interrupt management functions.

In the message cache region management (BM), on the premise that the total on-chip cache capacity is constant, the number of storage units corresponding to each message length is dynamically adjusted by configuring block capacity counters. Logically, the storage areas for different message lengths are organized with the same number of blocks but different block sizes to meet the required available space. The capacity of each block is recorded in a count register set configurable by the CPU, and a new division of the variable storage space takes effect when software resets these values. In the storage space recovery mechanism, allocation and recovery are implemented through one address queue per message-length storage space: each time a message is sent, an address release counter is incremented; when it reaches the threshold set by software, the address of the current block is recovered and its base address is written into the address recovery queue of the corresponding message-length storage space. Each storage space implements its own recovery logic and recovery queue. For allocation, when data arriving from the network interface is written into the storage space, the address allocation logic obtains a base address from the address queue of the storage space matching the length type of the incoming message and, combined with the allocation counter, calculates the storage position of the message. In this way, storage and recovery of the elastic cache are realized.
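As a minimal sketch of how the allocation and threshold-batched recovery described above could look in software, the C snippet below models one message-length storage space as a FIFO of free block base addresses plus an address release counter. All names (buf_space, bm_alloc, bm_release) and the in-memory FIFO representation are assumptions for illustration, not taken from the patent; the per-block allocation counter that offsets messages within a block is omitted.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_BLOCKS 64              /* illustrative upper bound on blocks per space */

/* One message-length storage space: a FIFO of free block base addresses
 * (the address queue / address recovery queue) plus the software-set
 * release threshold and the address release counter. */
typedef struct {
    uint32_t free_base[MAX_BLOCKS];    /* address queue of free block bases */
    uint32_t head, tail, count;
    uint32_t release_cnt;              /* address release counter           */
    uint32_t release_threshold;        /* threshold set by software         */
} buf_space;

/* Allocation: for an incoming message of this length class, pop a base
 * address from the space's address queue. */
static bool bm_alloc(buf_space *s, uint32_t *base_out)
{
    if (s->count == 0)
        return false;                          /* space exhausted */
    *base_out = s->free_base[s->head];
    s->head = (s->head + 1) % MAX_BLOCKS;
    s->count--;
    return true;
}

/* Recovery: each sent message bumps the release counter; when it reaches
 * the software threshold, the current block's base address is written back
 * into the recovery queue of this storage space. */
static void bm_release(buf_space *s, uint32_t base)
{
    if (++s->release_cnt < s->release_threshold)
        return;
    s->release_cnt = 0;
    s->free_base[s->tail] = base;
    s->tail = (s->tail + 1) % MAX_BLOCKS;
    s->count++;
}
```

Because each storage space keeps its own queue, counter and threshold, the six spaces can be instantiated as six independent buf_space objects, matching the statement that recovery logic and recovery queues are implemented independently.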

To support this characteristic, the descriptor management (DM) handles 2 types of descriptor queues, namely a network interface receive/send descriptor queue and a message receive/send descriptor queue constructed by the CPU, both organized as ring linked lists. To differentiate the priorities of messages received and sent by the network interface, the first type of descriptor queue is further split into high and low priority, so the DM manages 3 receive/send descriptor queues in total: a high-priority descriptor ring, a low-priority descriptor ring and a message descriptor ring constructed for the CPU. This supports both the fast processing path for messages received and sent by the network interface and simultaneous access by the on-chip multi-core CPU.

The interrupt management (IM) provides a software-configurable interrupt threshold register and interrupt enable bit, and supports two modes of message transmission and processing while disturbing the CPU as little as possible. The first is polling-based transmission, processing and sending: the interrupt enable is turned off and, once the DM detects that a message in the cache region is ready to be transmitted, processed or sent, the processing and sending are completed automatically without generating an interrupt. The second is interrupt-driven transmission, processing and sending, compatible with the traditional interrupt-based message processing flow: when the IM detects that the counter has reached the value set by the threshold register, an interrupt signal is generated and sent to the on-chip interrupt controller for corresponding processing.
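A hedged sketch of the two modes, assuming one interrupt state per CPU core: the structure, the function im_on_message_ready and the hook raise_onchip_interrupt are hypothetical names used only to illustrate the threshold logic described above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Per-CPU-core interrupt state suggested by the IM description. */
typedef struct {
    bool     irq_enable;     /* interrupt enable bit, set by software        */
    uint32_t pkt_counter;    /* messages made ready since the last interrupt */
    uint32_t threshold;      /* interrupt threshold register                 */
} im_state;

/* Hypothetical hook to the on-chip interrupt controller. */
extern void raise_onchip_interrupt(int cpu_id);

/* Called each time the DM marks a message ready for a CPU core.  In the
 * polling mode (irq_enable == false) the counter is still maintained but
 * no interrupt is raised; software discovers the work by polling the
 * descriptor ring instead. */
static void im_on_message_ready(im_state *im, int cpu_id)
{
    im->pkt_counter++;
    if (im->irq_enable && im->pkt_counter >= im->threshold) {
        raise_onchip_interrupt(cpu_id);   /* forward to the on-chip controller */
        im->pkt_counter = 0;              /* restart the accumulation          */
    }
}
```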

Compared with the prior art, the invention offers low hardware cost, low message processing delay and a selectable, configurable elastic storage space; it provides an efficient cache architecture for high-performance network processor chips, meets their design requirements, and has the following advantages:

(1) the elastic cache can dynamically adjust the cache space according to the network traffic and its characteristics, supporting elastic storage of messages of different lengths;

(2) the shared cache supports the fast processing path for messages received and sent by the network interface and simultaneous access by the on-chip multi-core CPU, greatly shortening the message processing path in the architecture and, by exploiting the streaming access characteristics of messages, adapting efficiently to the network message processing flow;

(3) both interrupt-free polling and interrupt-driven handling of message transmission, processing and sending are supported, improving the efficiency of the on-chip CPU and the on-chip interconnection network.

Drawings

Figure 1 is a diagram of a flexible shared cache architecture for on-chip message processing,

figure 2 is a block diagram of a flexible shared cache architecture implementation,

figure 3 is a process flow for delivering a flexible shared cache message over a network interface,

fig. 4 is a processing flow of the CPU issuing a message.

Detailed Description

The invention will be described in further detail below with reference to the drawings and specific examples.

Fig. 1 shows the elastic shared cache architecture for on-chip message processing constructed according to the present invention.

The packet buffer (Packet RAM) can be set up as buffer units of different sizes, e.g., 2KB, 1KB, 512B, 256B, 128B and 64B, and allows the user to dynamically adjust the number of each kind of storage unit while the total capacity stays constant (e.g., 4 MB). The six storage spaces are organized with the same number of blocks but different block sizes to provide the configurable space. Taking the case where each of the six storage spaces is divided into 16 blocks as an example, assume the block capacity of the 64B storage space is set to 6, that of the 128B storage space is set to 2, and so on; each block capacity is recorded in a count register set configurable by the CPU, and when software resets these block capacity values a new division of the variable storage space takes effect.
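As a rough sketch of how software might sanity-check a re-partition of the Packet RAM before writing the count register set, the snippet below simply treats the registers as per-size unit counts and checks them against the fixed total capacity. The register layout, the example values and the function names are assumptions that simplify the block organization described in the text.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CLASSES 6

/* Buffer-unit sizes named in the text: 2KB, 1KB, 512B, 256B, 128B, 64B. */
static const uint32_t unit_size[NUM_CLASSES] = { 2048, 1024, 512, 256, 128, 64 };

/* Hypothetical count register set: number of units software assigns to
 * each size class.  These values are illustrative only. */
static uint32_t count_reg[NUM_CLASSES] = { 512, 512, 1024, 2048, 4096, 8192 };

/* Check that a proposed repartition still fits the fixed total capacity
 * (4 MB in the example) before software commits the new register values. */
static int partition_fits(const uint32_t counts[NUM_CLASSES], uint32_t total_bytes)
{
    uint64_t used = 0;
    for (int i = 0; i < NUM_CLASSES; i++)
        used += (uint64_t)counts[i] * unit_size[i];
    return used <= total_bytes;
}

int main(void)
{
    if (partition_fits(count_reg, 4u * 1024 * 1024))
        printf("partition accepted\n");
    else
        printf("partition exceeds Packet RAM capacity\n");
    return 0;
}
```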

The elastic shared cache architecture for on-chip message processing mainly comprises three modules, namely message cache region management (BM), descriptor management (DM) and interrupt management (IM).

The BM functions include receiving/sending messages from/to the network interface, reading/writing the on-chip message buffer (Packet RAM), and managing the buffer IDs of the message buffer. The DM functions include writing the receive descriptor queue for a received message, reading the send descriptor queue to send a message, and providing a read/write interface through which the CPU accesses the descriptor queues. The IM functions include one interrupt counter per on-chip CPU, updating the counter when a message is received, providing an interface for the CPU to configure registers (for example the interrupt counter threshold), and, when the count of received messages exceeds the threshold, generating an interrupt in the interrupt signal format and forwarding it to the on-chip interrupt controller.

Taking six message lengths corresponding to six storage spaces as an example, the BM organizes the storage for the six different message lengths as fixed blocks and implements FIFO-based address space allocation and recovery. Allocation and recovery of the storage spaces are realized through six FIFO queues: the address release counter is incremented until it reaches the threshold set by software, at which point the address of the current block is recovered and its base address is written into the FIFO queue. The six storage spaces implement the recovery logic and recovery queues independently. When data arriving from the network interface is written into the storage space, the address allocation logic obtains a base address from the FIFO corresponding to the length type of the incoming message and, combined with the allocation counter, calculates the storage position of the message.

In the DM, the descriptor queues comprise 2 types: the network interface receive/send queues, marked hw in the figure and divided into a high-priority queue and a low-priority queue; and the message receive/send queue constructed by the CPU, marked sw in the figure. All queues are organized as ring linked lists, so the DM manages 3 types of receive/send queue descriptors in total, building a high-priority descriptor ring, a low-priority descriptor ring and a message descriptor ring for the CPU. Management of the high/low-priority descriptor rings is based on three pointers: a write descriptor pointer rx indicating the position of the message header for uplink data (from the network interface to the Packet RAM), a read descriptor pointer tx indicating the position of the message header for downlink data (from the Packet RAM to the network interface), and a software processing pointer p indicating the position of the message header being processed by the CPU. If rx is not equal to p, a message is waiting to be sent to the CPU for processing; if p is not equal to tx, a message is waiting to be sent by the network interface. Management of the message descriptor ring constructed by the CPU is based on two pointers: the read descriptor pointer tx for downlink data (from the Packet RAM to the network interface), indicating the position of the header of the message sent by HW, and the software processing pointer p; the interval p → tx is the free message buffer area. When the count of newly received messages exceeds the threshold register, an interrupt is generated and sent to the on-chip CPU.
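The pointer relations described above can be summarized in a short sketch, assuming a power-of-two ring depth and the hypothetical names desc_ring, ring_has_work_for_cpu and ring_has_work_for_nic; nothing here is prescribed by the patent beyond the rx/tx/p comparisons.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_DEPTH 256          /* illustrative ring size, not from the patent */

/* One descriptor ring with the three pointers named above: rx (hardware
 * write pointer, uplink), tx (hardware read pointer, downlink) and p
 * (software processing pointer). */
typedef struct {
    uint32_t rx;
    uint32_t tx;
    uint32_t p;
} desc_ring;

/* rx != p : at least one received message still awaits CPU processing. */
static bool ring_has_work_for_cpu(const desc_ring *r)
{
    return r->rx != r->p;
}

/* p != tx : at least one processed message still awaits transmission
 * by the network interface. */
static bool ring_has_work_for_nic(const desc_ring *r)
{
    return r->p != r->tx;
}

/* For the ring constructed by the CPU only tx and p are used; the
 * interval p -> tx is the free message buffer region. */
static uint32_t cpu_ring_free_slots(const desc_ring *r)
{
    return (r->tx + RING_DEPTH - r->p) % RING_DEPTH;
}
```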

Fig. 2 is a block diagram of the elastic shared cache architecture implementation.

The BM comprises four sub-modules: the message receiving module RECV_PKT, the message sending module SEND_PKT, the elastic shared cache address management module BID_MG, and the elastic shared cache read/write module BUF_IF. RECV_PKT receives an incoming message, extracts the cpuID information from the metadata, requests a BID (free buffer area address) from BID_MG, issues a write request (carrying the BID) to BUF_IF, fills the BID into the metadata and sends the metadata to the DM. SEND_PKT receives metadata from the DM, extracts it and issues a read request (carrying the BID) to BUF_IF; the read message and the metadata are sent out together, and after the message has been sent the BID is returned to the BID_MG module for recovery. If the message was sent by the CPU, a software-send flag (carrying the CPU ID) is sent to the DM module. BID_MG is responsible for initializing the BIDs corresponding to the free addresses of the message buffer, writing them into the free address queue and supporting software configuration of the buffer space size; it provides free BIDs to RECV_PKT and recovers BIDs whose messages have been sent. BUF_IF determines the starting address position (in the software or hardware storage space) according to the configuration and the BID, writing into the cache region (stripping the message header identification and invalid byte identification) and reading from the cache region (adding the message header identification and invalid byte identification).
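To make the BUF_IF address computation concrete, the sketch below assumes a BID that encodes a size class in its upper bits and a block index in its lower bits; this encoding, the base-address table and the function names are illustrative assumptions, not the patent's actual layout.

```c
#include <stdint.h>

#define NUM_SPACES 6   /* the six buffer-unit sizes used in the example */

/* Block sizes of the six storage spaces (2KB .. 64B) and their base
 * addresses in Packet RAM, filled in from the software configuration. */
static const uint32_t space_block_size[NUM_SPACES] = { 2048, 1024, 512, 256, 128, 64 };
static uint32_t space_base_addr[NUM_SPACES];

static inline uint32_t bid_space(uint32_t bid) { return bid >> 16; }
static inline uint32_t bid_index(uint32_t bid) { return bid & 0xFFFFu; }

/* BUF_IF-style address computation: locate the buffer slot in Packet RAM
 * from a BID before issuing the read or write on the data bus. */
static inline uint32_t buf_if_addr(uint32_t bid)
{
    uint32_t s = bid_space(bid);
    return space_base_addr[s] + bid_index(bid) * space_block_size[s];
}
```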

The DM consists of four sub-modules: the hardware write-descriptor module HW_WRITE_DES, the hardware read-descriptor module HW_READ_DES, the descriptor ring management module DES_MG, and the CPU read/write descriptor interface SW_DES_IF. HW_WRITE_DES receives metadata from the BM, extracts the cpuID information and updates the corresponding descriptor ring in DES_MG. HW_READ_DES polls the descriptor rings in DES_MG; if a message can be sent, it reads the metadata from the ring, outputs it to the BM and updates the hardware descriptor queue, and if it is a message sent by the CPU it updates the software descriptor queue. The DES_MG module maintains three types of descriptor queues (organized as rings): each CPU core maintains 3 descriptor queues (forming descriptor rings), namely high-priority hardware messages, low-priority hardware messages and software-generated messages. The SW_DES_IF module provides the CPU with the ability to read and write descriptors.

The IM consists of three sub-modules: the counter update module UPDATE_CNT, the CPU configuration module CONF_CNT_IF, and the interrupt request module GEN_INT. UPDATE_CNT receives requests from the DM to update the message counters and, if a counter exceeds its threshold, sends a request to GEN_INT to generate an interrupt. CONF_CNT_IF receives configuration requests from the CPU and configures the corresponding module. GEN_INT receives interrupt requests from UPDATE_CNT and generates the interrupt according to the interrupt message format.

Fig. 3 shows the processing flow for delivering a message from the network interface into the elastic shared cache; a code sketch of these steps follows the list.

1) The RECV_PKT module receives a message (carrying metadata) from the network interface.

2) A free buffer ID (BID) is read, and the message is then written into the RAM via the high-speed data bus (128-bit) according to the BID.

3) The BID is filled into the metadata, and the metadata is sent to the HW_WRITE_DES module.

4) The HW_WRITE_DES module writes the descriptor into the descriptor queue.

5) The message counter of the corresponding CPU core is updated, and if it exceeds the threshold, an interrupt is generated.
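The sketch below strings these five steps together. The metadata fields and the module hooks (bid_mg_alloc, buf_if_write, hw_write_des, update_cnt, gen_int) are hypothetical signatures used only to illustrate the call order, not interfaces defined by the patent.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint32_t cpu_id; uint32_t bid; uint32_t length; } metadata_t;

extern bool bid_mg_alloc(uint32_t length, uint32_t *bid);              /* step 2 */
extern void buf_if_write(uint32_t bid, const void *pkt, uint32_t len); /* step 2 */
extern void hw_write_des(const metadata_t *md);                        /* step 4 */
extern bool update_cnt(uint32_t cpu_id);  /* true when the counter exceeds its threshold */
extern void gen_int(uint32_t cpu_id);

/* Handling of one message arriving from the network interface. */
static void on_message_from_nic(const void *pkt, uint32_t len, metadata_t md)
{
    uint32_t bid;

    if (!bid_mg_alloc(len, &bid))       /* 2) fetch a free buffer ID             */
        return;                         /*    no free buffer: drop or back-pressure */
    buf_if_write(bid, pkt, len);        /* 2) write the message into Packet RAM  */
    md.bid    = bid;                    /* 3) fill the BID into the metadata     */
    md.length = len;
    hw_write_des(&md);                  /* 4) write the descriptor to the ring   */
    if (update_cnt(md.cpu_id))          /* 5) counter exceeded its threshold,    */
        gen_int(md.cpu_id);             /*    so raise the on-chip interrupt     */
}
```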

Fig. 4 shows the processing flow for a message issued by the CPU; a code sketch of these steps follows the list.

1) After the CPU finishes processing a message (or generates a message), it updates the descriptor queue.

2) HW_READ_DES polls the descriptor queue and, on finding a message to send, reads the descriptor information.

3) After reading the descriptor, HW_READ_DES sends it to the SEND_PKT module.

4) SEND_PKT reads the message from the RAM according to the BID in the metadata and recovers the BID (note that messages sent by the network interface and messages actively generated by the CPU are allocated separate storage spaces, and the BID_MG module only maintains the cache region addresses of the elastic shared cache used for messages arriving from the network interface).

5) SEND_PKT transmits the read message and the metadata to the network interface.
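A hedged sketch of the hardware side of this flow is given below; step 1), the CPU updating its descriptor ring through SW_DES_IF, is not shown. The hooks hw_read_des_poll, buf_if_read, bid_mg_free and nic_transmit are illustrative names, and the BID recycle applies only to network-interface buffers, per the note in step 4).

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint32_t cpu_id; uint32_t bid; uint32_t length; } metadata_t;

extern bool hw_read_des_poll(metadata_t *md);      /* 2) poll the descriptor queue     */
extern void buf_if_read(uint32_t bid, void *pkt, uint32_t len);
extern void bid_mg_free(uint32_t bid);             /* recycle a network-interface BID  */
extern void nic_transmit(const void *pkt, uint32_t len, const metadata_t *md);

/* Hardware-side loop draining descriptors written by the CPU. */
static void drain_cpu_tx_ring(void)
{
    metadata_t md;
    uint8_t pkt[2048];                         /* largest buffer unit in the example */

    while (hw_read_des_poll(&md)) {            /* 2)-3) descriptor found and read      */
        buf_if_read(md.bid, pkt, md.length);   /* 4) read the message from Packet RAM  */
        bid_mg_free(md.bid);                   /* 4) recover the BID                    */
        nic_transmit(pkt, md.length, &md);     /* 5) send message + metadata to the NIC */
    }
}
```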

In summary, the elastic shared cache architecture for on-chip message processing of the present invention is adapted to the flow characteristics and processing flow of network messages. It implements an elastic shared message cache, supports elastic storage of messages of different lengths, supports the fast processing path for messages received and sent by the network interface and simultaneous access by the on-chip multi-core CPU, and supports both polling-based and interrupt-driven message transmission and processing, so that storing and sending messages disturbs the CPU as little as possible and efficient software and hardware interaction is realized, thereby meeting the design requirements of a high-performance network processor chip.
