Method for adapting a constant bit rate client signal into a path layer of a telecommunication signal

Document No.: 108548  Publication date: 2021-10-15

This technology, "Method for adapting a constant bit rate client signal into a path layer of a telecommunication signal", was created by S. S. Gorshe and W. Mok on 2019-07-23. Abstract: The present invention discloses a method for rate adapting a Constant Bit Rate (CBR) client signal into a signal stream in a telecommunication signal communication link, the method comprising: encoding a control block header and an ordered set block designator into a control block at a source node; encoding a data block header and a count of data blocks to be encoded in the signal blocks into a plurality of path overhead data blocks; encoding a data block header into each of a plurality of signal blocks, the number of data blocks from the CBR signal being equal to the count plus the pad blocks; assembling the blocks into a path signal frame; appending a set of idle character blocks after the end of the path signal frame to match the link bit rate of a first link segment; and transmitting the path signal frame and the idle character blocks into the first link segment at the link bit rate.

1. A method for rate adapting a constant bit rate client signal into a signal stream in a 64B/66B block telecommunications signal communication link, said method comprising:

encoding an ordered set block designator into a control 64B/66B block at a source node;

encoding a count of data blocks to be encoded in a signal 64B/66B block into a plurality of path overhead 64B/66B data blocks at the source node;

encoding, at the source node, a total number of data blocks from the constant bit rate client signal into each of a plurality of signal 64B/66B blocks, the total number equal to the count sent in the path overhead 64B/66B data blocks and a number of 64B/66B pad blocks;

assembling the plurality of path overhead data 64B/66B blocks, the plurality of signal 64B/66B blocks, and the control 64B/66B block into a path signal frame, the control 64B/66B block occupying a last position in the path signal frame; and

appending a number of idle character 64B/66B blocks at a position following the control 64B/66B block after the path signal frame ends to match a link bit rate of a first link segment.

2. The method of claim 1, further comprising transmitting the path signal frame and the appended number of idle character 64B/66B blocks from the source node into the signal stream at the link bit rate into the first link segment.

3. The method of claim 1, wherein:

the count of data blocks to be encoded in a path overhead 64B/66B block is variable; and

the appended number of idle blocks is fixed.

4. The method of claim 3, wherein the number of 64B/66B pad blocks is distributed among the 64B/66B data blocks.

5. The method of claim 1, wherein:

the count of data blocks to be encoded in a path overhead 64B/66B block is fixed; and

the appended number of idle blocks is variable.

6. The method of claim 5, wherein the number of 64B/66B pad blocks is distributed among the 64B/66B data blocks.

7. The method of claim 2, further comprising:

receiving the path signal frame and the appended number of idle character 64B/66B blocks from the first link segment at an intermediate node;

adapting the link bit rate to a bit rate internal to the intermediate node by appending additional idle character 64B/66B blocks to the set of idle character 64B/66B blocks when the link bit rate is slower than the bit rate in the intermediate node, and by deleting idle character 64B/66B blocks from the set of idle character 64B/66B blocks when the link bit rate is faster than the bit rate in the intermediate node, to form a modified set of idle character 64B/66B blocks; and

transmitting the path signal frame and the modified set of idle character 64B/66B blocks from the intermediate node into the signal stream at the link bit rate into a second link segment.

8. The method of claim 7, further comprising:

receiving the path signal frame and the modified set of idle character 64B/66B blocks from the second link segment at the link bit rate at a sink node;

extracting, in the sink node, the count of encoded data blocks from the plurality of path overhead 64B/66B data blocks;

extracting the encoded data blocks from the signal 64B/66B blocks in the sink node;

regenerating the constant bit rate client signal from the extracted encoded data blocks;

determining, in the sink node, a bit rate of the constant bit rate client signal from a sink node reference clock, the count of extracted encoded data blocks, and the number of idle character 64B/66B blocks in the modified set of idle character 64B/66B blocks; and

adjusting a rate of a constant bit rate client signal clock for transmitting the constant bit rate client signal at the bit rate of the constant bit rate client signal.

9. The method of claim 1, further comprising:

encoding a control block sync header into a control 64B/66B block at the source node;

encoding a data block sync header into each of a plurality of path overhead 64B/66B data blocks at the source node; and

encoding a data block sync header into each of the plurality of signal 64B/66B blocks at the source node.

10. A source node for rate adapting a constant bit rate client signal into a signal stream in a 64B/66B block telecommunications signal communication link, said source node comprising:

a GMP engine synchronized with an external source of GMP windowed frame pulses;

a FIFO buffer coupled to receive a 64B/66B encoded client data stream;

clock rate measurement circuitry coupled to measure a clock rate of the 64B/66B encoded client data stream and provide the clock rate to the GMP engine;

a source of 64B/66B POH blocks;

a source of 64B/66B pad blocks;

a source of 64B/66B idle blocks;

a multiplexer having data inputs coupled to the FIFO buffer, the source of 64B/66B POH blocks, the source of 64B/66B pad blocks, and the source of 64B/66B idle blocks, the multiplexer having a control input and a data output; and

a multiplexer controller having a multiplexer control output coupled to the control input of the multiplexer, the multiplexer controller being responsive to GMP windowed frame pulses received from the external source of GMP windowed frame pulses and to frame data endian values extracted from the GMP engine to selectively pass data from one of the FIFO buffer, the source of 64B/66B POH blocks, the source of 64B/66B pad blocks, and the source of 64B/66B idle blocks to the data output, the multiplexer controller being configured to vary the number of 64B/66B pad blocks passed to the output of the multiplexer so that the GMP windowed frame is filled with a combination of 64B/66B data blocks and 64B/66B pad blocks.

11. A source node for rate adapting a constant bit rate client signal into a signal stream in a 64B/66B block telecommunications signal communication link, said source node comprising:

a GMP engine comprising a free-running source of GMP windowed frame pulses; a FIFO buffer coupled to receive a 64B/66B encoded client data stream;

clock rate measurement circuitry coupled to measure a clock rate of the 64B/66B encoded client data stream and provide the clock rate to the GMP engine;

a source of 64B/66B POH blocks;

a source of 64B/66B pad blocks;

a source of 64B/66B idle blocks;

a multiplexer having data inputs coupled to the FIFO buffer, the source of 64B/66B POH blocks, the source of 64B/66B pad blocks, and the source of 64B/66B idle blocks, the multiplexer having a control input and a data output; and

a multiplexer controller having a multiplexer control output coupled to the control input of the multiplexer, the multiplexer controller being responsive to GMP windowed frame pulses received from the free-running source of GMP windowed frame pulses and to frame data endian values extracted from the GMP engine to selectively pass data from one of the FIFO buffer, the source of 64B/66B POH blocks, the source of 64B/66B pad blocks, and the source of 64B/66B idle blocks to the data output, the multiplexer controller being configured to pass a fixed number of 64B/66B pad blocks to the output of the multiplexer to fill a GMP windowed frame with a combination of 64B/66B data blocks, 64B/66B pad blocks, and 64B/66B idle blocks.

12. An aggregation node for extracting constant bit rate client signals from a 64B/66B client signal stream over a 64B/66B block telecommunications signal communication link, the aggregation node comprising:

a DSP engine;

a clock rate measurement circuit that measures clock rate data from the 64B/66B client signal stream and is coupled to the DSP engine;

a GMP overhead and idle count extraction circuit coupled to read data from the 64B/66B client signal stream and to extract GMP overhead and an idle count from the 64B/66B client signal stream;

client payload extraction circuitry coupled to read data from the 64B/66B client signal stream and to extract a constant bit rate client signal from the 64B/66B client signal stream; and

a FIFO buffer coupled to the client payload extraction circuitry to receive the constant bit rate client signal extracted from the 64B/66B client signal stream and to output the constant bit rate client signal.

Background

Constant Bit Rate (CBR) signals (e.g., digital video signals) deliver bits at a known, fixed rate. It has become common to group successive data bits into 64-bit blocks and then pack these blocks into 66-bit line code blocks (64B/66B coding). The resulting block-coded stream then has a fixed rate of "W" bits/second (with some variance based on the accuracy of the CBR signal clock source).
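As a worked example of the rate relationship above (the function name is illustrative, not from the disclosure), 64B/66B coding expands the stream by exactly 66/64:

```python
def line_rate_bps(client_rate_bps: float) -> float:
    """64B/66B coding adds a 2-bit sync header per 64 payload bits,
    so the block-coded stream runs at 66/64 of the client rate."""
    return client_rate_bps * 66 / 64

# e.g. a 10 Gb/s CBR client produces a 10.3125 Gb/s block-coded stream
```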

The newly launched MTN project in ITU-T SG15 initially assumed that all client signals were Ethernet and lacked a straightforward method of supporting Constant Bit Rate (CBR) clients. There are two classes of prior solutions for transporting CBR clients along a path from a source node to a sink node. One class creates a CBR path signal that contains the client and some additional path overhead, then uses overhead in the server signal to accommodate the difference between the path signal rate and the server payload channel rate. While there are various methods within this category, the ITU-T Generic Mapping Procedure (GMP) is a common solution for mapping CBR digital client signals of arbitrary rate into the payload of a telecommunication network server layer channel. The source sends a count value (Cm) in the GMP overhead of each GMP window, which tells the sink node how many payload data blocks it will send in the next window. The source node inserts pad blocks using a Cm-based modulo arithmetic algorithm to fill any channel bandwidth not needed by the client signal, and the sink node recovers the data using the same algorithm. Since GMP overhead is relatively small and regularly spaced, this approach greatly simplifies the sink receiver's process of deriving the client signal rate when extracting the signal. A disadvantage of this approach is that it requires server segment overhead that must be handled at each node along the path; moreover, the server channel used for the MTN project in ITU-T SG15 does not provide GMP overhead.
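The Cm-based modulo arithmetic mentioned above can be sketched as follows. This uses the G.709-style criterion in which payload position j carries data when (j × Cm) mod Pm < Cm, where `Pm` denotes the number of payload block positions per window; the function name is illustrative:

```python
def gmp_block_map(cm: int, pm: int) -> list[bool]:
    """Return the data/pad pattern for one GMP window of pm payload
    positions: True at 1-based position j means 'data block', False
    means 'pad block'.  The modulo criterion spreads the cm data
    blocks evenly over the window, so source and sink can derive the
    same pattern from Cm alone."""
    return [(j * cm) % pm < cm for j in range(1, pm + 1)]
```

Because both ends compute the same map from the transmitted Cm, no per-block signaling is needed to locate the pad blocks.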

Another category of solutions operates in the packet domain. Fixed-size portions of the CBR client signal stream are periodically encapsulated into layer 2 or layer 3 packets (e.g., Ethernet frames) that are sent from the source to the sink as the path signal. The sink node then extracts the client data from the packets to reconstruct the client signal. Differences in clock domains along the path are accommodated by inserting or removing inter-packet idle blocks. This approach is prevalent in networks that primarily carry packet traffic with a relatively small amount of CBR traffic.

One drawback of this solution is the large amount of overhead bandwidth required for packet encapsulation. Another disadvantage is that packet processing along the path, including the standard Ethernet idle insertion/removal procedure (IMP), generates jitter due to irregular inter-packet arrival times at the sink node. This adds significant complexity to the process of deriving the client signal rate at the receiver, since the average packet arrival time can be modified by intervening equipment. In addition, the use of packets increases the delay at the source and sink nodes and requires much larger buffers at the sink node.

GMP requires a consistent, fixed number of bits per GMP window, as defined by ITU-T G.709 (Optical Transport Network). The server channel is point-to-point between nodes, such that the GMP terminates at the ingress of an intermediate node and is regenerated at the egress port of that node. Since the server channel of the MTN lacks GMP overhead, it is desirable to move the GMP function into the "path" overhead (POH) added to the client signal stream. The POH passes through intermediate nodes without modification. Thus, placing GMP in the POH allows legacy intermediate nodes to be used without upgrading, since it avoids the need to add GMP processing to them. A problem with using GMP in the POH is that the intermediate nodes have a different clock domain than the source node, which makes it impossible to maintain a constant, fixed number of bits for each GMP window. GMP only adjusts the amount of payload information sent per window; the time period of the window is set by the source node based on its reference clock (REFCLK).

Disclosure of Invention

The present invention overcomes the intermediate clock domain problem by adding a mechanism that allows for small variable spacing between GMP windows.

The present invention allows the use of GMP in the Path Overhead (POH) to adapt the path stream to the server channel at the source, so that the path stream can traverse intermediate nodes and provide the sink node with frequency (rate) information that the sink node can use to recover the client signal.

The client stream is composed of Ethernet-compliant 64B/66B blocks. The POH is inserted into the stream as a special Ordered Set (OS) block and identifiable 64B/66B data blocks to create a path signal stream. Unlike G.709, which relies on framing patterns and fixed spacing to find the GMP OH, the present invention uses 64B/66B OS blocks to identify the boundaries of the GMP window, while the other POH/GMP blocks are located at fixed positions within the GMP window.

The present invention uses the following fact: by definition, the GMP Cm is a count of the number of 64B/66B data (i.e., non-pad) blocks that the source node will transmit in the next GMP window. Since Cm allows the sink to determine when the source has finished sending all the data for that window, the sink node can accommodate extending the GMP window by any number of blocks. This insight allows a method of adding a small number of 64B/66B idle blocks to each GMP window, so that an intermediate node can increase or decrease the number of idle blocks for rate adaptation according to the standard Ethernet idle insertion/removal procedure (IMP). The number of idle blocks inserted by the source is a function of the selected frame length, chosen so that a maximum 200 ppm clock difference between nodes can be accommodated.
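A minimal sketch of the idle-sizing relationship described above, assuming the worst-case deletion budget is simply the frame length times the tolerated clock offset (the function name and exact sizing rule are illustrative, not taken from the disclosure):

```python
import math

def min_idle_blocks_per_frame(frame_blocks: int, ppm: float = 200.0) -> int:
    """Worst-case number of idle blocks an intermediate node may need
    to delete from one frame when its clock is up to `ppm`
    parts-per-million slower than the upstream link.  The source must
    insert at least this many idle blocks per frame so that IMP
    deletions never have to touch data or overhead blocks."""
    return math.ceil(frame_blocks * ppm / 1_000_000)
```

For example, a 10,000-block frame with a 200 ppm budget needs at least two idle blocks appended per frame.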

The sink node recovers the client signal rate from a combination of the received GMP overhead and the average number of idle blocks it receives. GMP further assists the receiver PLL through a smoother distribution of its stuff/pad blocks.

The present invention can use existing nodes, which perform this rate adjustment by adding or removing Ethernet idle blocks (i.e., the standard IMP procedure).

The sink node that extracts the CBR client signal must determine the CBR signal rate in order to recreate the output signal at exactly the correct rate. According to the invention, the sink node does so using a combination of the rate it recovers/observes from the segment signal entering the node, the amount of IMP padding between the segment signal and the path signal, and the amount of GMP padding between the client signal and the path signal.

According to one aspect of the invention, a method for rate adapting a constant bit rate client signal into a signal stream in a 64B/66B block telecommunications signal communication link includes encoding an ordered set block designator into a control 64B/66B block at a source node; encoding a count of data blocks to be encoded in the signal 64B/66B blocks into a plurality of path overhead 64B/66B data blocks at the source node; encoding, at the source node, a total number of data blocks from the constant bit rate client signal into each of a plurality of signal 64B/66B blocks, the total number equal to the count sent in the path overhead 64B/66B data blocks plus a number of 64B/66B pad blocks; assembling the plurality of path overhead 64B/66B data blocks, the plurality of signal 64B/66B blocks, and the control 64B/66B block into a path signal frame, the control 64B/66B block occupying the last position in the path signal frame; appending a set of 64B/66B idle blocks at a location past the control 64B/66B block after the end of the path signal frame to match the link bit rate of a first link segment; and transmitting the path signal frame and the appended 64B/66B idle blocks from the source node into the signal stream at the link bit rate into the first link segment.

According to an aspect of the invention, the count of data blocks to be encoded in the path overhead 64B/66B block is variable, while the appended number of idle blocks is fixed.

According to an aspect of the invention, the count of data blocks to be encoded in the path overhead 64B/66B block is fixed, while the appended number of idle blocks is variable.

According to an aspect of the invention, the method includes receiving, at an intermediate node, the path signal frame and the appended idle character 64B/66B blocks from the first link segment; adapting the link bit rate to a bit rate internal to the intermediate node by conditionally appending additional idle character 64B/66B blocks to the set of idle character 64B/66B blocks when the link bit rate is slower than the bit rate in the intermediate node, and by conditionally deleting idle character 64B/66B blocks from the set of idle character 64B/66B blocks when the link bit rate is faster than the bit rate in the intermediate node, to form a modified set of idle character 64B/66B blocks; and transmitting the path signal frame and the modified set of idle character 64B/66B blocks from the intermediate node into the signal stream at the link bit rate into the second link segment.

According to one aspect of the invention, the method includes receiving, at the sink node, the path signal frame and the modified set of idle character 64B/66B blocks from the second link segment at the link bit rate; extracting, in the sink node, the count of encoded data blocks from the plurality of path overhead 64B/66B data blocks; extracting, in the sink node, the encoded data blocks from the signal 64B/66B blocks; regenerating the constant bit rate client signal from the extracted encoded data blocks; determining, in the sink node, the bit rate of the constant bit rate client signal from the sink node reference clock, the count of extracted encoded data blocks, and the number of idle character 64B/66B blocks in the modified set of idle character 64B/66B blocks; and adjusting the rate of a constant bit rate client signal clock for transmitting the constant bit rate client signal at the bit rate of the constant bit rate client signal.

Drawings

The invention will be explained in more detail below with reference to embodiments and the accompanying drawings, in which:

FIG. 1 is a schematic diagram showing a basic network illustrating the use of GMP in the POH to adapt a path stream to the source's server channel;

FIG. 2 is a schematic diagram illustrating a first method of a source node deriving a path signal rate in accordance with an aspect of the present invention;

FIG. 3 is a diagram illustrating a second method of a source node deriving a path signal rate in accordance with an aspect of the present invention;

FIGS. 4A, 4B, and 4C are schematic diagrams showing the structure of a 64B/66B block;

FIG. 5A is a schematic diagram showing a representative path signal frame with client data arranged in a first manner within the frame;

FIG. 5B is a schematic diagram showing a representative path signal frame with client data arranged in a second manner within the frame;

FIG. 5C is a schematic diagram showing a representative path signal frame with client data arranged in a third manner within the frame;

FIG. 6A is a block diagram of an example of a source node configured in accordance with an aspect of the present invention;

FIG. 6B is a block diagram of another example of a source node configured in accordance with an aspect of the invention; and

FIG. 7 is a block diagram of an example of an aggregation node configured in accordance with an aspect of the present invention.

Detailed Description

Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other embodiments will readily occur to those skilled in the art.

Referring initially to fig. 1, a schematic diagram illustrates a typical data flow in a network 10 according to the present invention from a source node 12 through an intermediate node 14 and ultimately to a sink or target node 16.

There are two nested channels for carrying CBR signals through the network. The first channel extends end-to-end, i.e., from where the CBR signal enters the network in the source node 12 and through one or more intermediate nodes 14 to where the CBR signal exits the network in the sink or destination node 16. This channel is referred to herein as a "path" layer channel and is indicated in parenthesis in fig. 1 by reference numeral 18.

The CBR signal plus overhead information inserted by the present invention is carried hop-by-hop through a network consisting of pieces of switching equipment (nodes) connected to each other by some physical media channel. This physical medium channel (the second of the two nested channels) is referred to herein as the "segment" layer channel. The first segment layer channel connects the source node 12 to the intermediate node 14 and is indicated in parenthesis in FIG. 1 by reference numeral 20. The second segment layer channel connects the intermediate node 14 to the sink or target node 16 and is indicated in parenthesis in FIG. 1 by reference numeral 22.

A set of 64B/66B encoded CBR client signals 24 is delivered to the source node 12, which inserts them into the link 26 toward the intermediate node 14 after adding the appropriate performance monitoring overhead. The segment layer 20 covers all information carried on the link 26. For the purposes of this disclosure, it is assumed that the incoming client signal 24 has been adapted to the 64B/66B format in such a way that all 64B/66B blocks are data blocks.

The intermediate node 14 is typically connected to a plurality of source nodes 12 and a plurality of sink nodes 16. The client signal is switched by the intermediate node 14 onto a set of egress links (one of which is identified by reference numeral 30) connected to a plurality of sink nodes. The particular sink node 16 shown in FIG. 1 is designated as the target sink node for the 64B/66B encoded CBR client signal 24; it extracts the performance monitoring overhead and recovers the 64B/66B encoded CBR client signal 24.

Managing this traffic from multiple source nodes to multiple sink nodes is typically handled using the FlexE calendar slot technique known in the art. This layer of data transmission is not shown in fig. 1 to avoid overcomplicating the disclosure and obscuring the invention. Those of ordinary skill in the art will appreciate that the clock timing concepts for data transmission disclosed herein reflect the calendar slot timing employed in FlexE technology, which is essential to an understanding of the present invention.

The segment layer is based on FlexE, where the time-division multiplexed channel is called a calendar slot. The term comes from the fact that each calendar slot occurs multiple times per FlexE frame at fixed intervals between occurrences of the same calendar slot.

Segment layers 20 and 22 are carried by the link 26 between the source node 12 and the intermediate node 14 and the link 30 between the intermediate node 14 and the sink node 16, respectively. Those of ordinary skill in the art will appreciate that performance monitor overhead (not shown) for the link 26, which is not relevant to the present invention, is inserted before the data leaves the source node 12 and is monitored in the intermediate node 14. Similarly, performance monitor overhead (not shown) for the link 30, which is not relevant to the present invention, is inserted before the data leaves the intermediate node 14 and is monitored in the sink node 16. The segment layer 20 or 22 originates at the transmitting end of the link 26 or 30 and terminates at the receiving end in the intermediate node (link 26) or the sink node (link 30), respectively.

Each 64B/66B encoded CBR client signal 24 is also associated with a path layer 18 that extends from the source node 12 to the sink node 16. The intermediate node 14 treats the CBR client signal 24 and its associated path layer 18 overhead information as a single unit; together they are switched inseparably from the link 26 (also referred to as ingress link 26) to the link 30 (also referred to as egress link 30).

At the ingress of the source node 12, the client data received in the 64B/66B encoded CBR client signal 24 is prepared for forwarding to the intermediate node 14 within the dashed line designated by reference numeral 32. The insert POH block 34 inserts path-level performance monitor overhead information for the client signal 24. The path-level performance monitor overhead includes several components, including, for purposes of the present invention, a quantity Cm, which identifies the number of blocks of client data to be sent in the next frame. At reference numeral 36, rate adaptation is performed via GMP insertion and the number of inserted idle blocks is identified. Idle blocks are inserted to adapt the client signal 24 to the payload capacity of the calendar slot of the FlexE multiplex protocol and to the clock rate of the link 26 connecting the source node 12 to the intermediate node 14. The clock rate of the link 26 is known to the source node 12 transmitting into the link 26. The client signal 24, augmented by the POH and the inserted idle blocks, is transmitted by the source node 12 to the intermediate node 14 over the link 26.

As will be shown with reference to FIGS. 4A, 4B and 4C, a control block header and an ordered set block designator are encoded into the control 64B/66B block at the source node 12 as part of the insert POH block 34. A data block header and a count of the data blocks to be encoded in the signal 64B/66B blocks are encoded into a plurality of path overhead 64B/66B data blocks. A data block header is encoded into each of the plurality of signal 64B/66B blocks, the total number of data blocks from the CBR client signal 24 being equal to the count sent in the path overhead 64B/66B data blocks plus the number of 64B/66B pad blocks. The data blocks and path overhead blocks are preferably distributed rather than clustered, for greater immunity to error bursts, since they are later used to help reconstruct the client signal clock of the client CBR signal at the sink node.
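For background, the 64B/66B conventions this relies on can be sketched as follows. The sync-header values (01 for data, 10 for control) and the 0x4B ordered-set block type come from IEEE 802.3 Clause 49; the bit layout here is simplified for illustration and is not the patent's exact encoding:

```python
SYNC_DATA = 0b01        # data block: 8 data octets follow the sync header
SYNC_CTRL = 0b10        # control block: an 8-bit block type field follows
BT_ORDERED_SET = 0x4B   # block type used for ordered-set (/O/) blocks

def block_kind(block66: int) -> str:
    """Classify a 66-bit block (held in a Python int, sync header in the
    top two bits, block type in the next octet for control blocks)."""
    sync = (block66 >> 64) & 0b11
    if sync == SYNC_DATA:
        return "data"
    if sync == SYNC_CTRL:
        bt = (block66 >> 56) & 0xFF
        return "ordered_set" if bt == BT_ORDERED_SET else "control"
    return "invalid"  # sync 00 or 11 is a coding violation
```

This is how a receiver can locate the ordered-set POH block that marks the GMP window boundary without any out-of-band framing.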

A plurality of path overhead 64B/66B data blocks, a plurality of signal 64B/66B blocks, and a control 64B/66B block are assembled into a path signal frame. The control 64B/66B block occupies the last position in the path signal frame. A set of idle character 64B/66B blocks is appended at locations past the control 64B/66B block after the end of the path signal frame, the set being sized to match the link bit rate of the first link segment. The path signal frame and the appended idle character 64B/66B blocks are transmitted from the source node into the signal stream at the corresponding link bit rate into the first link segment 26.

In the intermediate node 14, the encoded client signal transmitted by the source node 12 is adapted to the clock rate of the intermediate node 14 at reference numeral 38, which inserts or deletes idle character 64B/66B blocks from the data stream as needed to match the data stream rate to the clock rate of the intermediate node 14. The path signal frame and the appended idle character 64B/66B blocks are received at the intermediate node 14 from the first link segment 26, and the link bit rate is adapted to the bit rate internal to the intermediate node 14 by appending additional idle character 64B/66B blocks to the set of idle character 64B/66B blocks when the link bit rate is slower than the bit rate in the intermediate node, and by deleting idle character 64B/66B blocks from the set when the link bit rate is faster than the bit rate in the intermediate node, to form a modified set of idle character 64B/66B blocks. Following distribution by the calendar slot switch 40, which is discussed further below, the modified set of idle character 64B/66B blocks is further modified at reference numeral 44 to adapt the clock rate of the intermediate node 14 to the rate of the link 30, and the path signal frame and the further modified set of idle character 64B/66B blocks are transmitted from the intermediate node 14 into the signal stream at the respective link bit rate into the second link segment 30.
In particular, the bit rate is adapted from the internal bit rate of the intermediate node 14 to the link bit rate of the link 30 by appending additional idle character 64B/66B blocks to the set of idle character 64B/66B blocks when the bit rate in the intermediate node 14 is slower than the bit rate of the link 30, and by deleting idle character 64B/66B blocks from the set when the bit rate in the intermediate node 14 is faster than the bit rate of the link 30, to form the further modified set of idle character 64B/66B blocks.
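A simplified model of the idle insertion/removal performed at reference numerals 38 and 44, under the assumption that idles are deleted in place and inserted at the end of the stream segment (a real IMP implementation operates continuously on the block stream):

```python
IDLE = "IDLE"

def imp_rate_adapt(blocks, delta):
    """Ethernet idle insertion/removal (IMP) as used at an intermediate
    node: append `delta` extra idle blocks when the downstream clock is
    faster (delta > 0); delete up to -delta idle blocks when it is
    slower (delta < 0).  Only idle blocks are touched; data and
    overhead blocks pass through unmodified."""
    out = []
    to_delete = -delta if delta < 0 else 0
    for b in blocks:
        if b == IDLE and to_delete > 0:
            to_delete -= 1          # drop this idle block
            continue
        out.append(b)
    if delta > 0:
        out.extend([IDLE] * delta)  # append extra idle blocks
    return out
```

Because the frame itself is never edited, the path-layer POH (and hence GMP) passes through the node untouched.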

The intermediate node 14 includes a calendar slot switch 40 for distributing client data into calendar slots according to the intended sink node, in accordance with the FlexE scheme known in the art. Link 42 is shown sending data to another sink node (not shown).

Calendar slot switch 40 is a switching fabric that connects path layer signals carried in a set of calendar slots on an input port to a set of calendar slots on an output port. It is conceptually similar to any fabric used for switching/cross-connecting constant rate signals. The main difference from other architectures is that the calendar slot switch 40 must use the I/D rate adapters 38 and 44 for rate adaptation. The I/D rate adapters 38 and 44 insert or remove idle blocks from between the path signal frames, as shown in FIGS. 5A, 5B and 5C, so that the resulting data rates match the actual calendar slot rate of the switch fabric in the calendar slot switch 40 and then the actual calendar slot rate of the link 30 at the output port of the intermediate node 14.

The path signal frames and the further modified set of idle character 64B/66B blocks are received at the sink node 16 from the second link segment 30 at the corresponding link bit rate. In the sink node 16, the count of encoded data blocks is extracted from the plurality of path overhead 64B/66B data blocks. The encoded data blocks are extracted from the signal 64B/66B blocks, and the constant bit rate client signal is regenerated from them. The bit rate of the constant bit rate client signal is determined from the recovered bit rate of the incoming link 30, the extracted count (Cm) of encoded data blocks, and the number of idle character 64B/66B blocks in the further modified set, and the rate of the constant bit rate client signal clock is adjusted so that the constant bit rate client signal is transmitted at the bit rate of the constant bit rate client signal 24 as provided to the source node 12.
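The rate determination above can be illustrated with made-up numbers, assuming the client stays 64B/66B encoded so each of its blocks occupies 66 bits on the link:

```python
def recover_client_rate(link_rate_bps, cm, pad_blocks, poh_blocks, idle_blocks):
    """Illustrative estimate of the CBR client bit rate at the sink node.

    One path signal frame plus its idle set occupies
    (cm + pad + poh + idle) 66-bit blocks on the link; of those,
    only the Cm data blocks carry the 64B/66B-encoded client signal.
    """
    total_blocks = cm + pad_blocks + poh_blocks + idle_blocks
    frame_seconds = total_blocks * 66 / link_rate_bps
    client_bits = cm * 66  # client blocks remain 64B/66B encoded
    return client_bits / frame_seconds

# 90 of every 100 link blocks carry client data, so the client
# runs at 90% of the link rate.
rate = recover_client_rate(66_000_000, cm=90, pad_blocks=5,
                           poh_blocks=4, idle_blocks=1)
```

More idle or pad blocks per frame lower the estimate, which is exactly the signal the sink's clock recovery uses.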

Path-layer performance monitoring overhead information for the CBR client signal 24 is extracted from the client data signal at reference numeral 46 in the sink node 16. This information includes the number Cm, which identifies how many data blocks are to be recovered from the next frame. The number Cm of data blocks to be recovered from the current frame was extracted from the previous frame at reference numeral 46.

At reference numeral 48, the GMP overhead (Cm) is recovered, the number of received idle blocks is counted, and the GMP pad blocks and all idle blocks are discarded. The output of block 48 is the resulting client 64B/66B CBR encoded signal, as shown at reference numeral 50.

As will be appreciated by those of ordinary skill in the art, the intermediate node 14 passes the client 64B/66B encoded signal through, removing or adding idle blocks only as needed to adapt the rate of the incoming signal to its own clock rate and to the clock rate of the link 30 between it and the sink node 16. The intermediate node 14 need not expend processing power to unpack and repackage the client 64B/66B encoded signal.

POH insertion and rate adaptation performed at reference numerals 34 and 36 adapt the rate of the path-overhead-enhanced 64B/66B encoded client signal to the payload capacity of the FlexE calendar slot associated with the selected path layer (not shown) and to the clock rate of the link 26. According to a first aspect of the invention, shown in FIG. 2 (which shows a first embodiment of reference numeral 32 in more detail), the number Cm of data blocks is variable, and a variable number of 64B/66B pad blocks are added to the frame at reference numeral 52 to achieve a nominal-rate stream whose bit rate is lower by a fixed amount than the payload capacity of the FlexE calendar slot of the selected path. The remainder of the payload capacity is filled at reference numeral 54 by inserting a fixed number of 64B/66B idle blocks after the frame. In other words, the source node 12 inserts a variable number of 64B/66B pad blocks into the client data within the frame such that, when a fixed/constant number of idle blocks is added at the end of the frame, the time length of the resulting signal exactly matches the rate of the FlexE calendar slot that will carry it. In accordance with this aspect of the invention, the source node 12 derives the clock rate of the path layer signal 18 from the FlexE channel ("CalSlot") rate and uses dynamic GMP for client mapping and source rate adaptation. The source node 12 transmits a constant minimum number of idle blocks per frame; IMP is not performed at the source node 12. The source node 12 GMP includes word-fraction information to assist the receiver phase-locked loop (PLL). The sink node 16 determines the original client signal rate by examining the combination of the dynamic GMP information and the average number of idle blocks it receives relative to the known number inserted by the source node. Any difference between the known number of idle blocks inserted by the source node and the average number received indicates that one or more intermediate nodes have modified the number of idle blocks.
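The FIG. 2 budget can be sketched numerically. The payload area size and idle count below are hypothetical values, not figures from the patent:

```python
def fig2_frame(client_blocks_this_frame, payload_area, fixed_idles):
    """Sketch of the FIG. 2 variant: the frame period is fixed, so the
    payload area always holds `payload_area` blocks. Cm client data
    blocks fill part of it, a variable number of GMP pad blocks fill
    the rest, and a constant number of idle blocks follows every frame.
    """
    cm = client_blocks_this_frame
    if not 0 <= cm <= payload_area:
        raise ValueError("Cm must fit within the payload area")
    pads = payload_area - cm  # slow client -> more pads, fast -> fewer
    return {"Cm": cm, "pads": pads, "idles": fixed_idles}

slow_client = fig2_frame(10_290, payload_area=10_360, fixed_idles=40)
fast_client = fig2_frame(10_310, payload_area=10_360, fixed_idles=40)
```

Note that the idle count never varies at the source in this variant; only the Cm/pad split absorbs the client rate offset.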

According to another aspect of the present invention, illustrated with reference to FIG. 3 (which shows a second embodiment of reference numeral 32 in more detail), a variable number of 64B/66B pad blocks are inserted to construct a pad-enhanced stream having a bit rate different from that of the 64B/66B encoded client signal 24. As in the embodiment of FIG. 2, POH insertion is performed at reference numeral 34. A fixed number of 64B/66B data blocks (Cm) and 64B/66B pad blocks are added to the frame at reference numeral 56 to achieve a stream whose bit rate is a variable amount lower than the payload capacity of the FlexE calendar slot of the selected path 18. The remainder of the payload capacity is filled by inserting a variable number of 64B/66B idle blocks in each frame, as shown at reference numeral 58, to fill the allocated FlexE calendar slot. In accordance with this aspect of the invention, the source node 12 derives the path signal rate from the client rate, maps using static GMP, and performs source rate adaptation using IMP. A predetermined constant GMP Cm is used in order to create a path signal that is slightly slower than the nominal server channel rate. The standard FlexE "shim" layer then uses IMP to add idle blocks between frames to fill any remaining bandwidth in the link 26. The sink node 16 determines the original client rate based only on the average number of 64B/66B idle blocks received. In this embodiment, GMP is used primarily to provide smooth delivery of payload blocks within the path frame payload at a fixed rate per path frame.
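In the FIG. 3 variant the sink's only rate signal is the average idle count. A toy version of that estimate, ignoring the fixed POH/pad scaling (which would be a further constant factor):

```python
def client_rate_from_idles(link_rate_bps, frame_blocks, avg_idles_per_frame):
    """Sketch of the FIG. 3 sink estimate: Cm and the pad count are
    constant, so the path frame is a fixed number of blocks; the only
    variable quantity is the average number of idle blocks per frame.
    The client-bearing fraction of the link is
    frame_blocks / (frame_blocks + idles).
    """
    return link_rate_bps * frame_blocks / (frame_blocks + avg_idles_per_frame)

# A 100 Mb/s link averaging 99 frame blocks per idle block: the path
# signal occupies 99% of the link.
rate = client_rate_from_idles(100_000_000, frame_blocks=99,
                              avg_idles_per_frame=1.0)
```

Averaging over many frames is what smooths out per-hop idle insertions and deletions by intermediate nodes.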

While standard link protocols specify that the bit rate of a link segment 26 or 30 is nominally the same between each pair of connected nodes, differences in the accuracy of the clock source at each node result in small frequency variations in the rate of each node-to-node link. Thus, each node must make some adjustment to the number of 64B/66B idle blocks, adding the appropriate number of 64B/66B idle blocks between path signal frames to match the segment-layer channel rate of the next hop.

The per-client idle I/D rate adaptation block 38 of the intermediate node 14 inserts or deletes idle blocks on a per-client basis. The bit rate of the ingress stream on the link 26 is adjusted to match the clock in the intermediate node 14, which governs the payload capacity of the calendar slots in the FlexE scheme set by the calendar slot switching block 40. The calendar slot switching block 40 switches client signals delivered by one set of calendar slots on the ingress link 26 to the corresponding FlexE calendar slots of the target set on the egress link 30. Typically, the capacity of the calendar slots in the switch 40 matches the capacity of the egress link 30; in this case, the rate adaptation block 44 may be omitted. In the event that the calendar slot rates of the calendar slot switch 40 and the egress link 30 are not the same, the rate adaptation block 44 inserts or deletes idle blocks in the client stream to match the rate of the resulting stream to the payload capacity rate of the calendar slot at the egress link 30.

The end-to-end path layer 18 carrying the CBR client is sent by the source node 12 at a bit rate of "X" bits/second. The bit rate of the segment layer channel 20 or 22 carrying the path layer channels between nodes is "Z" bits/second, where Z is slightly higher than X. The present invention adds identifiable filler blocks to the path stream to accommodate the difference between the X rate and the Z rate. According to one aspect of the invention, special Ethernet control blocks (Ethernet idle blocks or ordered set blocks) are used as the filler blocks. According to the second aspect of the invention, the identified filler blocks are GMP pad blocks.

FIGS. 4A, 4B, and 4C are schematic diagrams illustrating the structure of a 64B/66B block. FIG. 4A shows a control block 60. The block contains 64 information bits preceded by a 2-bit header, identified at reference numeral 62 and sometimes referred to as a "sync header". If the 64B/66B block carries control information, the header is a control block header and is set to 10, as shown in FIG. 4A, and the 8-bit field at byte 1 after the header identifies the control block type. The block identified by reference numeral 60 is a control block. For the purposes of the present invention, the only control block type of interest is the ordered set block, specified by block type 0x4B. An Ordered Set (OS) 64B/66B block is shown in FIG. 4A.
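A minimal sketch of the sync header and block-type test described above. The 66-bit block is modeled as a Python integer with the 2-bit sync header in the top bits; the exact wire bit ordering is simplified for illustration.

```python
ORDERED_SET_TYPE = 0x4B  # the only control block type of interest here

def make_control_block(block_type, payload):
    """Build a 66-bit control block as an integer: sync header 0b10,
    then the 8-bit block type at byte 1, then seven payload bytes."""
    assert len(payload) == 7
    body = block_type
    for byte in payload:
        body = (body << 8) | byte
    return (0b10 << 64) | body

def is_ordered_set(block):
    """True when the sync header marks a control block and the
    block type is the ordered set code 0x4B."""
    sync = block >> 64
    block_type = (block >> 56) & 0xFF
    return sync == 0b10 and block_type == ORDERED_SET_TYPE
```

A data block would instead carry sync header 0b01 followed by eight data bytes, and would fail the `is_ordered_set` test on its header alone.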

FIG. 4B illustrates the organization of the 64B/66B control block 60 and the three associated 64B/66B POH data blocks identified at reference numeral 64. Byte positions 7 and 8 in the three POH data blocks 64 are used to transmit the data count Cm together with error correction data for Cm. The Cm data and error correction data are distributed over the three 64B/66B POH data blocks 64 so that, in the event of a single corruption event during transmission, the error correction data can still be used to recover the quantity Cm.
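The protection property can be illustrated with a deliberately simple scheme: repeating the 16-bit Cm field in each of the three POH blocks and majority-voting at the sink. The patent's actual error-correction code is not specified here; this only demonstrates why a single corrupted block is survivable.

```python
from collections import Counter

def spread_cm(cm):
    """Place the 16-bit Cm field into byte positions 7-8 of each of the
    three POH data blocks (modeled here as three identical copies)."""
    return [cm & 0xFFFF] * 3

def recover_cm(fields):
    """Majority vote across the three copies; tolerates one corruption."""
    value, votes = Counter(fields).most_common(1)[0]
    if votes < 2:
        raise ValueError("more than one POH block corrupted")
    return value

fields = spread_cm(10_300)
fields[1] ^= 0x00FF  # a single corruption event in transit
recovered = recover_cm(fields)
```

Real designs would use a proper error-correcting code rather than repetition, but the distribution across three blocks serves the same single-event goal.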

Referring now to FIG. 4C, if the 64B/66B block contains only data (i.e., it is a data character), the header is a data block header and is set to a value of 01, and the 64 bits following the header contain 8 bytes of data (e.g., bytes from an Ethernet packet). The upper 64B/66B data block, shown at 66 in FIG. 4C, contains only client data, represented by Val 1 through Val 8 in byte positions 1 through 8. A data block may also carry an additional POH field including GMP overhead, as shown in the lower 64B/66B POH block of FIG. 4C at reference numeral 62 (also denoted as 64B/66B block 64 in FIG. 4B).

FIGS. 5A, 5B, and 5C illustrate three different exemplary, non-limiting frame arrangements. FIGS. 5A and 5B are arrangements in which the N data blocks (identified at reference numeral 68 as "payload") are distributed into segments. FIG. 5A shows the N data blocks divided into four segments, each segment including N/4 data blocks. Each data block segment is followed by a 64B/66B POH block 70. The 64B/66B POH block 72 at the end of the frame is a 64B/66B control block. FIG. 5B shows the N data blocks divided into three segments, each segment including N/3 data blocks. Each data block segment is preceded by a 64B/66B POH block. The 64B/66B POH block at the end of the frame is a control block.

FIG. 5C shows an arrangement in which the N data blocks (identified as "payload" 68) are grouped together, preceded by a set of three 64B/66B POH blocks. The 64B/66B POH block at the end of the frame is always a control block.

The frame in each of FIGS. 5A, 5B, and 5C is followed by a number of 64B/66B idle blocks (identified at reference numeral 74), which are used to adapt the frame rate to slight variations in the bit rates of the source node 12, intermediate node 14, sink node 16, and the links 26, 30 connecting them, as previously described.
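The FIG. 5A layout, for example, can be assembled as follows (block contents abstracted to strings, and N assumed divisible by 4):

```python
def build_frame_5a(data_blocks, n_idles):
    """Sketch of the FIG. 5A layout: N payload blocks split into four
    equal segments, each followed by a POH block; the last POH position
    is the control block, and the idle set trails the frame."""
    n = len(data_blocks)
    assert n % 4 == 0, "FIG. 5A assumes N divisible by 4"
    seg = n // 4
    frame = []
    for i in range(4):
        frame += data_blocks[i * seg:(i + 1) * seg]
        frame.append("CTRL" if i == 3 else "POH")
    return frame + ["IDLE"] * n_idles

frame = build_frame_5a(["DATA"] * 8, n_idles=2)
```

Placing the control block last, just before the idle set, matches the positioning rationale given below for the other figures as well.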

The control POH block 72 is positioned at the end of each of the frames depicted in FIGS. 5A, 5B, and 5C. This is done because intermediate nodes in telecommunication systems are already configured to insert idle blocks into the data stream, and specifically to insert any necessary idle blocks 74 immediately following a control block. If the control block 72 of the present invention were located in any of the other POH block positions, there would be a risk that an intermediate node could insert an idle block 74 immediately after it, which would destroy the sink node's ability to correctly locate the path signal frame.

Referring now to FIG. 6A, a block diagram illustrates an example of a source node 80 configured in accordance with an aspect of the present invention. The source node 80 implements the aspect of the invention illustrated in FIG. 2. In FIG. 2, the number of GMP pad blocks is varied to fill a GMP frame having a fixed period: if the 64B/66B client encoded signal is slow, more pad blocks are added; if the client is fast, fewer pad blocks are added. An external frame pulse on line 86, generated by a reference clock within the source node, ensures that the GMP frame has a fixed period. Since GMP frames have a fixed period and a fixed number of blocks per frame, and FlexE calendar slots have a fixed capacity per unit time, the difference between them can be filled with a fixed number of 64B/66B idle blocks.

64B/66B client data is received on line 82. The path layer frame boundaries are generated by a GMP engine 84 that is time aligned by an external GMP windowed frame pulse on line 86. The GMP windowed frame pulses are generated by a master timing clock (not shown) of the node 80.

The GMP engine 84 determines the locations of the POH blocks and the GMP pad blocks. The sum of payload data blocks and pad blocks per GMP frame is fixed, but the mix of payload data blocks and pad blocks per frame is variable and is calculated from the client data rate measured by the clock rate measurement circuit 88. A fixed number of 64B/66B idle blocks are inserted by the idle insertion block 90 every GMP frame period, regardless of the fill level of the 64B/66B encoded client data blocks in the FIFO buffer 92. The multiplexer controller 94 is controlled by the GMP engine 84 to instruct the multiplexer 96 to select among payload data (64B/66B client data) from the FIFO buffer 92, 64B/66B idle blocks from the idle insertion block 90, 64B/66B pad blocks from the pad insertion block 98, and 64B/66B POH blocks from the POH insertion block 100. The output of the multiplexer 96 is presented to a FlexE calendar slot on line 98.
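The selection logic of FIG. 6A can be approximated in a few lines. The block order and counts below are illustrative; as noted next, the real GMP engine distributes pad blocks among the data blocks rather than appending them:

```python
def fig6a_frame(client_fifo, payload_area, fixed_idles):
    """Sketch of the FIG. 6A source: per GMP frame period, emit POH,
    then the payload area (client blocks plus pads summing to a fixed
    size), then a constant number of idle blocks regardless of how
    full the client FIFO is."""
    cm = min(len(client_fifo), payload_area)
    data = [client_fifo.pop(0) for _ in range(cm)]
    pads = ["PAD"] * (payload_area - cm)
    return ["POH"] * 3 + data + pads + ["CTRL"] + ["IDLE"] * fixed_idles

fifo = ["D0", "D1", "D2", "D3", "D4"]
frame = fig6a_frame(fifo, payload_area=8, fixed_idles=4)
```

The constant idle count per frame is what distinguishes this variant from the FIFO-fill-driven idle insertion of FIG. 6B.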

In the embodiments shown in both FIGS. 2 and 3, the pad blocks are distributed among the data blocks rather than being concentrated at one location.

Referring now to FIG. 6B, a block diagram illustrates another example of a source node 110 configured in accordance with an aspect of the present invention. Certain elements of the source node 110 are common to the source node 80 of FIG. 6A and are designated in FIG. 6B by the same reference numerals used for those elements in FIG. 6A.

The source node 110 implements the aspect of the invention illustrated in FIG. 3. 64B/66B client data is received on line 82. The path layer frame boundaries are generated by a free-running GMP engine 84 without external time alignment. The GMP engine 84 determines the locations of the 64B/66B POH blocks and the GMP pad blocks. The number of payload data blocks and 64B/66B pad blocks per frame is fixed. The higher the client rate, the shorter the time needed to accumulate a payload's worth of 64B/66B client data blocks within a GMP frame; the lower the client rate, the longer it takes. The period of the GMP frame is therefore determined by the bit rate of the incoming 64B/66B client data blocks on line 82. The multiplexer controller 94 monitors the fill level of the FIFO buffer 92, which accepts 64B/66B client data blocks over line 82. When the fill level of the FIFO buffer 92 is low, additional 64B/66B idle blocks are inserted; when the fill level is high, a reduced number of 64B/66B idle blocks are inserted. The 64B/66B idle blocks are inserted between path layer frames. The multiplexer 96 is controlled by the GMP engine 84 to select among the payload data (64B/66B client data blocks) from the FIFO buffer 92, the 64B/66B idle blocks from the idle insertion block 90, the 64B/66B pad blocks from the pad insertion block 98, and the 64B/66B POH blocks from the POH insertion block 100.
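The FIFO-fill-driven idle decision of FIG. 6B can be sketched with a simple linear control law. The `base` and `gain` values and the law itself are illustrative choices, not taken from the patent:

```python
def idles_for_frame(fifo_fill, target_fill, base=8, gain=0.5):
    """Low FIFO fill (slow client) -> insert extra idle blocks between
    path layer frames; high fill (fast client) -> insert fewer, so
    client data drains faster. The result is clamped at zero."""
    idles = base + int(gain * (target_fill - fifo_fill))
    return max(0, idles)

on_target = idles_for_frame(10, target_fill=10)    # nominal idle count
slow_client = idles_for_frame(4, target_fill=10)   # FIFO running low
fast_client = idles_for_frame(20, target_fill=10)  # FIFO running high
```

A real controller would filter the fill level to avoid jitter, but the sign of the response is the essential behavior described above.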

Referring now to FIG. 7, a block diagram illustrates an example of a sink node 120 configured, in accordance with an aspect of the present invention, to receive the streams generated by the source nodes shown in FIGS. 1-3. Incoming FlexE calendar slots carrying the client payload stream are received on line 122. The clock rate measurement circuit 124 measures the bit rate of the incoming FlexE calendar slots carrying the client payload stream. This rate is scaled by the DSP engine 126 to recover the client payload rate as a function of the Cm value and the number of idle blocks detected by the recover-GMP-overhead-and-count-idles circuit 128. The extract client payload block 130 identifies payload, idle, and pad blocks within the GMP frame using the Cm value and the idle blocks identified by the recover-GMP-overhead-and-count-idles circuit 128. The 64B/66B pad and idle blocks are discarded, while the client payload 64B/66B data blocks are written into the FIFO buffer 132. A phase-locked loop (PLL) 134 is controlled to read from the FIFO buffer 132 onto line 136 at the client payload rate. All other blocks in the FlexE calendar slot are discarded.
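In the abstracted block model used in the sketches above, the extract-and-discard step of FIG. 7 reduces to filtering out everything that is not client payload:

```python
def extract_client_payload(calendar_slot_blocks):
    """Drop pad, idle, POH, and control blocks; keep only the client
    payload 64B/66B data blocks, in order, for the FIFO buffer."""
    overhead = {"PAD", "IDLE", "POH", "CTRL"}
    return [b for b in calendar_slot_blocks if b not in overhead]

slot = ["POH", "D0", "PAD", "D1", "CTRL", "IDLE", "IDLE"]
payload = extract_client_payload(slot)
```

In hardware the Cm value tells block 130 which positions are payload without string tags, but the retained-versus-discarded split is the same.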

Those of ordinary skill in the art will appreciate that the intermediate node 14 of fig. 2 is configured in a conventional manner. It is clear from the disclosure herein that the intermediate node only inserts or deletes 64B/66B idle blocks, as known in the art, to synchronize the data flow timing between its input and output rates, regardless of the contents of the 64B/66B data and 64B/66B POH blocks.

The present invention provides several advantages over prior-art solutions. The rate adaptation of CBR client signals to the segment layer is located within the path layer signal, not within the segment layer overhead, so the segment layer format is unaffected. Furthermore, the use of IMP allows GMP to be used to improve performance while keeping the path signals transparent to the intermediate nodes, and therefore having no impact on them. Unlike previous solutions, the present invention allows the use of GMP that is completely contained within the path signal. This provides the advantages of GMP over IMP/packet solutions, including minimizing the required sink FIFO buffer and simplifying the sink's recovery of the client clock. The present invention maximizes the server channel bandwidth available for client signals, especially relative to packet-based solutions.

While embodiments and applications of this invention have been shown and described, it would be apparent to those skilled in the art that many more modifications than mentioned above are possible without departing from the inventive concepts herein. Accordingly, the invention is not limited except as by the spirit of the appended claims.
