On-the-fly interleaving/rate matching and deinterleaving/de-rate matching for 5G NR

Document No.: 1220561    Publication date: 2020-09-04

Note: This technique, On-the-fly interleaving/rate matching and deinterleaving/de-rate matching for 5G NR, was devised by C. Liu, A. Risso, and L. Zhang and filed on 2018-11-07. Its main content is as follows. In one aspect, a method of encoding data for transmission includes: for an encoded data block on which interleaving and rate matching are to be performed, first and second interleaving and rate matching engines operating in parallel read the encoded data block from a buffer, starting at first and second starting points of the buffer, respectively. The encoded output data includes the interleaved and rate-matched data from both engines. In another aspect, a method of decoding received data includes: first and second deinterleaving and de-rate matching engines read data of a log-likelihood ratio (LLR) buffer, starting at first and second start points of the LLR buffer, respectively. The decoded output data includes the deinterleaved and de-rate matched data from both engines.

1. A method of interleaving and rate matching data for transmission, the method comprising:

for an encoded data block on which interleaving and rate matching are to be performed, reading, by a first interleaving and rate matching engine, the encoded data block from a buffer starting at a first starting point of the buffer to generate first interleaved and rate matched data;

reading, by a second interleaving and rate matching engine operating in parallel with the first interleaving and rate matching engine, the encoded data block from the buffer starting at a second starting point of the buffer to generate second interleaved and rate matched data; and

providing encoded output data comprising the first interleaved and rate matched data and the second interleaved and rate matched data.

2. The method of claim 1, comprising:

employing a code block buffer with Log2(constellation_size) columns or rows as an interleaver memory; and

utilizing n interleaving and rate matching engines, where n is equal to Log2(constellation_size).

3. The method of claim 2, comprising causing the first interleaving and rate matching engine and the second interleaving and rate matching engine to each run on a different column or row of the interleaver memory.

4. The method of claim 1, comprising causing the first interleaving and rate matching engine and the second interleaving and rate matching engine to each run in parallel with a same encoded bit size and a different start offset.

5. A method of deinterleaving and de-rate matching received data, the method comprising:

reading, by a first deinterleaving and de-rate matching engine, data of a log-likelihood ratio (LLR) buffer starting at a first start point of the LLR buffer to generate first deinterleaved and de-rate matched data;

reading, by a second deinterleaving and de-rate matching engine operating in parallel with the first deinterleaving and de-rate matching engine, data of the LLR buffer starting at a second start point of the LLR buffer to generate second deinterleaved and de-rate matched data; and

providing decoded output data comprising the first deinterleaved and de-rate matched data and the second deinterleaved and de-rate matched data.

6. The method of claim 5, comprising:

employing a code block buffer having Log2(constellation_size) columns or rows as a deinterleaver memory; and

utilizing n deinterleaving and de-rate matching engines, where n is equal to Log2(constellation_size).

7. The method of claim 6, comprising causing the first and second deinterleaving and de-rate matching engines to each run on different columns or rows of the deinterleaver memory.

8. The method of claim 5, comprising operating the first deinterleaving and de-rate matching engine and the second deinterleaving and de-rate matching engine in parallel, each with a same encoded bit size and a different start offset.

9. The method of claim 5, comprising causing the first deinterleaving and de-rate matching engine and the second deinterleaving and de-rate matching engine to write a deinterleaving result into a hybrid automatic repeat request (HARQ) buffer in parallel, thereby combining the deinterleaving result with previous data in the HARQ buffer.

10. An encoder, comprising:

at least one processor;

at least one memory coupled to the at least one processor, wherein the at least one processor is configured to:

for an encoded data block on which interleaving and rate matching are to be performed, read, by a first interleaving and rate matching engine, the encoded data block from a buffer starting at a first starting point of the buffer to generate first interleaved and rate matched data;

read, by a second interleaving and rate matching engine operating in parallel with the first interleaving and rate matching engine, the encoded data block from the buffer starting at a second starting point of the buffer to generate second interleaved and rate matched data; and

provide encoded output data comprising the first interleaved and rate matched data and the second interleaved and rate matched data.

11. The encoder of claim 10, wherein the at least one processor is configured to:

employ a code block buffer with Log2(constellation_size) columns or rows as an interleaver memory; and

utilize n interleaving and rate matching engines, where n is equal to Log2(constellation_size).

12. The encoder of claim 11, wherein the at least one processor is configured to cause the first interleaving and rate matching engine and the second interleaving and rate matching engine to each run on a different column or row of the interleaver memory.

13. The encoder of claim 10, wherein the at least one processor is configured to cause the first interleaving and rate matching engine and the second interleaving and rate matching engine to each run in parallel with a same encoded bit size and a different start offset.

14. A decoder, comprising:

at least one processor;

at least one memory coupled to the at least one processor, wherein the at least one processor is configured to:

read, by a first deinterleaving and de-rate matching engine, data of a log-likelihood ratio (LLR) buffer starting at a first start point of the LLR buffer to generate first deinterleaved and de-rate matched data;

read, by a second deinterleaving and de-rate matching engine operating in parallel with the first deinterleaving and de-rate matching engine, data of the LLR buffer starting at a second start point of the LLR buffer to generate second deinterleaved and de-rate matched data; and

provide decoded output data comprising the first deinterleaved and de-rate matched data and the second deinterleaved and de-rate matched data.

15. The decoder of claim 14, wherein the at least one processor is configured to:

employ a code block buffer having Log2(constellation_size) columns or rows as a deinterleaver memory; and

utilize n deinterleaving and de-rate matching engines, where n is equal to Log2(constellation_size).

16. The decoder of claim 15, wherein the at least one processor is configured to cause the first and second deinterleaving and de-rate matching engines to each run on different columns or rows of the deinterleaver memory.

17. The decoder of claim 14, wherein the at least one processor is configured to cause the first and second deinterleaving and de-rate matching engines to each run in parallel with a same encoded bit size and a different start offset.

18. The decoder of claim 14, wherein the at least one processor is configured to cause the first and second deinterleaving and de-rate matching engines to write a deinterleaving result into a hybrid automatic repeat request (HARQ) buffer in parallel, thereby combining the deinterleaving result with previous data in the HARQ buffer.

Technical Field

Aspects of the present disclosure relate generally to wireless communication systems and, more particularly, to interleaving/rate matching and deinterleaving/de-rate matching for 5G NR. Some embodiments of the techniques discussed below provide techniques for saving memory, reducing the design footprint, and operating on the fly with reduced buffer space.

Introduction

Wireless communication networks are widely deployed to provide various communication services such as voice, video, packet data, messaging, broadcast, and so on. These networks, typically multiple-access networks, support communication for multiple users by sharing the available network resources.

A wireless communication network may include several base stations or Node Bs capable of supporting communication for several User Equipments (UEs). A UE may communicate with a base station via the downlink and uplink. The downlink (or forward link) refers to the communication link from the base stations to the UEs, and the uplink (or reverse link) refers to the communication link from the UEs to the base stations.

A base station may transmit data and control information to a UE on the downlink and/or may receive data and control information from a UE on the uplink. On the downlink, transmissions from a base station may encounter interference due to transmissions from neighbor base stations or from other wireless Radio Frequency (RF) transmitters. On the uplink, transmissions from a UE may encounter uplink transmissions from other UEs communicating with neighbor base stations or interference from other wireless RF transmitters. This interference may degrade performance on both the downlink and uplink.

As the demand for mobile broadband access continues to increase, the likelihood of interference and congested networks continues to increase as more UEs access the long-range wireless communication network and more short-range wireless systems are deployed in the community. Research and development continue to advance wireless communication technologies not only to meet the ever-increasing demand for mobile broadband access, but also to advance and enhance the user experience with mobile communications.

Brief summary of some embodiments

The following presents a simplified summary of some aspects of the disclosure in order to provide a basic understanding of the discussed technology. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended to neither identify key or critical elements of all aspects of the disclosure, nor delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a general form as a prelude to the more detailed description that is presented later.

In the 5G specification, interleaving occurs after rate matching on the transmit side, whereas in LTE, rate matching occurs after interleaving. Rate matching uses two schemes: puncturing and repetition. With the repetition scheme, performing interleaving after rate matching means that, conventionally, the incoming interleaved and repeated log-likelihood ratios (LLRs) must be buffered before deinterleaving, and only then can the repetitions be combined. Because the number of repetitions is typically large, this buffering requires a very large buffer, resulting in a considerable area penalty in a circuit implementation.
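The repetition case can instead be handled on the fly by treating the code block buffer as circular and simply reading past its end. The following sketch illustrates the idea only; the function name and the plain modular indexing are illustrative assumptions, not the 3GPP-specified bit-selection procedure:

```python
def rate_match_circular(coded_bits, E, k0=0):
    """Read E bits from a circular code block buffer starting at offset k0.

    When E exceeds the buffer length, the wrap-around itself produces the
    repetitions, so the repeated bits never need to be stored separately.
    """
    N = len(coded_bits)
    return [coded_bits[(k0 + i) % N] for i in range(E)]
```

For example, reading 6 bits from the 4-bit buffer [1, 0, 1, 1] yields [1, 0, 1, 1, 1, 0]; the last two bits are repetitions obtained by wrapping rather than by storing a repeated copy.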

In one aspect of the disclosure, a method of interleaving and rate matching data for transmission is provided. For example, for an encoded data block on which interleaving and rate matching are to be performed, a method may comprise: reading, by a first interleaving and rate matching engine, the encoded data block from a buffer starting at a first starting point of the buffer, thereby generating first interleaved and rate matched data. Additionally, the method may include: reading, by a second interleaving and rate matching engine operating in parallel with the first interleaving and rate matching engine, the encoded data block from the buffer starting at a second starting point of the buffer, thereby generating second interleaved and rate matched data. The method may further comprise: providing encoded output data that includes the first interleaved and rate matched data and the second interleaved and rate matched data.
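A rough model of this multi-engine arrangement is sketched below. It assumes Qm engines (one per interleaver row), contiguous start offsets of E/Qm, thread-based parallelism, and a bit-by-bit merge of the engine outputs; all of these specifics are illustrative assumptions rather than the exact hardware design:

```python
from concurrent.futures import ThreadPoolExecutor

def engine_read(buffer, start, count):
    """One interleaving/rate-matching engine: read `count` bits from the
    circular buffer starting at `start`, wrapping at the end (repetition)."""
    N = len(buffer)
    return [buffer[(start + i) % N] for i in range(count)]

def interleave_rate_match(coded, E, Qm=2):
    """Run Qm engines in parallel, each at its own start offset, then merge
    their outputs bit by bit so that consecutive output bits come from
    different engines (a row-column style interleave)."""
    per_engine = E // Qm                          # assumes E divisible by Qm
    starts = [r * per_engine for r in range(Qm)]  # each engine's start offset
    with ThreadPoolExecutor(max_workers=Qm) as pool:
        rows = list(pool.map(lambda s: engine_read(coded, s, per_engine), starts))
    out = []
    for j in range(per_engine):                   # column-wise readout
        for r in range(Qm):
            out.append(rows[r][j])
    return out
```

With coded bits 0..7, E=8, and Qm=2, the first engine reads 0..3 and the second reads 4..7, and the merged output alternates between them: [0, 4, 1, 5, 2, 6, 3, 7].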

In another aspect, a method of decoding received data is provided. For example, a method may comprise: reading, by a first deinterleaving and de-rate matching engine, data of a log-likelihood ratio (LLR) buffer starting at a first starting point of the LLR buffer, thereby generating first deinterleaved and de-rate matched data. The method may additionally comprise: reading, by a second deinterleaving and de-rate matching engine operating in parallel with the first deinterleaving and de-rate matching engine, data of the LLR buffer starting at a second starting point of the LLR buffer, thereby generating second deinterleaved and de-rate matched data. The method may further comprise: providing decoded output data that includes the first deinterleaved and de-rate matched data and the second deinterleaved and de-rate matched data.
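The receive side can be modeled similarly. In the sketch below, each of Qm deinterleaving/de-rate matching engines takes every Qm-th LLR from the received stream and accumulates it into a length-N soft buffer at its own start offset, so repeated LLRs are combined by addition as they are read instead of being buffered first. The sequential loop standing in for parallel engines and the offset choice are illustrative assumptions:

```python
def deinterleave_de_rate_match(llrs, N, Qm=2):
    """Invert a row-column style interleave and accumulate the LLRs into a
    length-N soft buffer; positions hit more than once (repetition) are
    combined by simple addition, so no large intermediate buffer is needed."""
    per_engine = len(llrs) // Qm
    soft = [0.0] * N
    for r in range(Qm):              # each r models one parallel engine
        start = r * per_engine       # this engine's start offset
        for j in range(per_engine):
            soft[(start + j) % N] += llrs[j * Qm + r]
    return soft
```

For example, deinterleaving the stream [0, 4, 1, 5, 2, 6, 3, 7] with N=8 and Qm=2 recovers [0, 1, 2, 3, 4, 5, 6, 7], and any wrapped (repeated) positions simply receive two LLR contributions.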

In another aspect, an encoder is provided. For example, an encoder may include: at least one processor and at least one memory coupled to the at least one processor. For an encoded data block on which interleaving and rate matching are to be performed, the at least one processor is configured to read, by a first interleaving and rate matching engine, the encoded data block from the buffer starting at a first starting point of the buffer, thereby generating first interleaved and rate matched data. The at least one processor is additionally configured to read, by a second interleaving and rate matching engine operating in parallel with the first interleaving and rate matching engine, the encoded data block from the buffer starting at a second starting point of the buffer, thereby generating second interleaved and rate matched data. The at least one processor is further configured to provide encoded output data that includes the first interleaved and rate matched data and the second interleaved and rate matched data.

In another aspect, a decoder is provided. For example, a decoder may have: at least one processor and at least one memory coupled to the at least one processor. The at least one processor is configured to read, by a first deinterleaving and de-rate matching engine, data of a log-likelihood ratio (LLR) buffer starting at a first starting point of the LLR buffer, thereby generating first deinterleaved and de-rate matched data. The at least one processor is additionally configured to read, by a second deinterleaving and de-rate matching engine operating in parallel with the first deinterleaving and de-rate matching engine, data of the LLR buffer starting at a second starting point of the LLR buffer, thereby generating second deinterleaved and de-rate matched data. The at least one processor is further configured to provide decoded output data that includes the first deinterleaved and de-rate matched data and the second deinterleaved and de-rate matched data.
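Claims 9 and 18 further describe writing the deinterleaving result into a hybrid automatic repeat request (HARQ) buffer in parallel while combining it with the data already there. A minimal sketch of that soft-combining step, assuming plain per-position LLR addition (chase-style combining; the function name is illustrative):

```python
def harq_combine(harq_buffer, new_soft):
    """Accumulate newly deinterleaved LLRs into the HARQ buffer position by
    position, so each (re)transmission strengthens the soft estimates."""
    for i, llr in enumerate(new_soft):
        harq_buffer[i] += llr
    return harq_buffer
```

In hardware, each engine would perform this read-add-write for its own portion of the HARQ buffer concurrently; the sketch shows only the arithmetic of the combine.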

Other aspects, features and embodiments of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific exemplary embodiments of the invention in conjunction with the accompanying figures. While features of the invention may be discussed below with respect to certain embodiments and figures, all embodiments of the invention can include one or more of the advantageous features discussed herein. In other words, while one or more embodiments may have been discussed as having certain advantageous features, one or more such features may also be used in accordance with the various embodiments of the invention discussed herein. In a similar manner, although example embodiments may be discussed below as device, system, or method embodiments, it should be appreciated that such example embodiments may be implemented in a variety of devices, systems, and methods.

Brief Description of Drawings

A further understanding of the nature and advantages of the present disclosure may be realized by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

Fig. 1 is a block diagram illustrating details of a wireless communication system in accordance with some embodiments of the present disclosure.

Fig. 2 is a block diagram conceptually illustrating a design of a base station/gNB and a UE configured according to some embodiments of the present disclosure.

Fig. 3A and 3B are block diagrams conceptually illustrating memories of interleavers and/or deinterleavers, according to some embodiments of the present disclosure.

Fig. 4 is a block diagram conceptually illustrating traversal of a circular interleaver or deinterleaver memory buffer by multiple engines, according to some embodiments of the present disclosure.

Fig. 5 is a block diagram conceptually illustrating another memory of an interleaver and/or deinterleaver in accordance with some embodiments of the present disclosure.

Fig. 6 is a block diagram conceptually illustrating a decoder according to some embodiments of the present disclosure.

Fig. 7A is a block diagram illustrating a first portion of a decoder according to some embodiments of the present disclosure.

Fig. 7B is a block diagram illustrating a second portion of the decoder of fig. 7A, according to some embodiments of the present disclosure.

Fig. 7C is a block diagram illustrating a MIMO_FIFO memory register utilized by the engines of the decoders of fig. 7A and 7B.

Fig. 8 is a block diagram illustrating an encoder in accordance with some embodiments of the present disclosure.

Fig. 9 is a block diagram illustrating example blocks of an encoding method according to some embodiments of the present disclosure.

Fig. 10 is a block diagram illustrating example blocks of a decoding method according to some embodiments of the present disclosure.

Detailed Description

The detailed description set forth below in connection with the appended drawings is intended as a description of various possible configurations and is not intended to limit the scope of the present disclosure. Rather, the detailed description includes specific details for the purpose of providing a thorough understanding of the subject matter of the invention. It will be apparent to one skilled in the art that these specific details are not required in every case, and that in some instances, well-known structures and components are shown in block diagram form for clarity of presentation.

The present disclosure relates generally to providing or participating in communications between two or more wireless devices in one or more wireless communication systems (also referred to as wireless communication networks). In various embodiments, the techniques and apparatus may be used for wireless communication networks such as Code Division Multiple Access (CDMA) networks, Time Division Multiple Access (TDMA) networks, Frequency Division Multiple Access (FDMA) networks, orthogonal FDMA (OFDMA) networks, single carrier FDMA (SC-FDMA) networks, Long Term Evolution (LTE) networks, Global System for Mobile Communications (GSM) networks, and other communication networks. As described herein, the terms "network" and "system" may be used interchangeably depending on the particular context.

A CDMA network may implement radio technologies such as Universal Terrestrial Radio Access (UTRA), CDMA2000, and so on, for example. UTRA includes wideband CDMA (W-CDMA) and Low Chip Rate (LCR). CDMA2000 covers IS-2000, IS-95 and IS-856 standards.

A TDMA network may, for example, implement a radio technology such as GSM. The 3GPP defines a standard for the GSM EDGE (enhanced data rates for GSM evolution) Radio Access Network (RAN), also denoted GERAN. GERAN is the radio component of GSM/EDGE along with a network that interfaces base stations (e.g., the Ater and Abis interfaces) with a base station controller (A interface, etc.). The radio access network represents the component of the GSM network through which telephone calls and packet data are routed from the Public Switched Telephone Network (PSTN) and the internet to and from subscriber handsets, also known as user terminals or User Equipment (UE). The network of the mobile telephone operator may comprise one or more GERAN, which in the case of a UMTS/GSM network may be coupled with a Universal Terrestrial Radio Access Network (UTRAN). The operator network may also include one or more LTE networks, and/or one or more other networks. The various different network types may use different Radio Access Technologies (RATs) and Radio Access Networks (RANs).

An OFDMA network may implement radio technologies such as evolved UTRA (E-UTRA), IEEE 802.11, IEEE 802.16, IEEE 802.20, Flash-OFDM, etc., for example. UTRA, E-UTRA and GSM are part of the Universal Mobile Telecommunications System (UMTS). Specifically, LTE is a UMTS release that uses E-UTRA. UTRA, E-UTRA, GSM, UMTS and LTE are described in documents from an organization named "Third Generation Partnership Project" (3GPP), while CDMA2000 is described in documents from an organization named "Third Generation Partnership Project 2" (3GPP2). These various radio technologies and standards are known or under development. For example, the Third Generation Partnership Project (3GPP) is a collaboration between groups of telecommunications associations that is intended to define the globally applicable third generation (3G) mobile phone specification. 3GPP Long Term Evolution (LTE) is a 3GPP project aimed at improving the Universal Mobile Telecommunications System (UMTS) mobile phone standard. The 3GPP may define specifications for next generation mobile networks, mobile systems, and mobile devices.

For clarity, certain aspects of the apparatus and techniques may be described below with reference to an exemplary LTE implementation or in an LTE-centric manner, and LTE terminology may be used as an illustrative example in portions of the following description; however, the description is not intended to be limited to LTE applications. Indeed, the present disclosure concerns shared access to wireless spectrum between networks using different radio access technologies or radio air interfaces.

Further in operation, a wireless communication network adapted according to the concepts herein may operate with any combination of licensed or unlicensed spectrum depending on load and availability. Thus, it will be apparent to those skilled in the art that the systems, apparatus and methods described herein may be applied to other communication systems and applications other than the specific examples provided.

Although aspects and embodiments are described herein by way of illustration of some examples, those skilled in the art will appreciate that additional implementations and use cases may be generated in many different arrangements and scenarios. The innovations described herein may be implemented across many different platform types, devices, systems, shapes, sizes, and packaging arrangements. For example, embodiments and/or uses can be produced via integrated chip embodiments and/or other non-module component-based devices (e.g., end-user devices, vehicles, communication devices, computing devices, industrial equipment, retail/purchase devices, medical devices, AI-enabled devices, etc.). While some examples may or may not be specific to each use case or application, the described innovations may have broad applicability. Implementations may range from chip-level or modular components to non-module, non-chip-level implementations, and further to distributed or OEM devices or systems incorporating an aggregation of one or more of the described aspects. In some practical settings, a device incorporating the described aspects and features may also necessarily include additional components and features for implementing and practicing the claimed and described embodiments. The innovations described herein are intended to be practiced in a wide variety of implementations, including large and small devices of different sizes, shapes, and compositions, chip-level components, multi-component systems (e.g., RF chains, communication interfaces, processors), distributed arrangements, end-user devices, and so forth.

Fig. 1 illustrates a wireless network 100 for communication in accordance with some embodiments. Although discussion of the techniques of this disclosure is provided with respect to an LTE-a network (shown in fig. 1), this is for illustration purposes. The principles of the disclosed technology may be used in other network deployments, including fifth generation (5G) networks. As will be appreciated by those skilled in the art, the components appearing in fig. 1 are likely to have relevant counterparts in other network arrangements, including, for example, cellular network arrangements and non-cellular network arrangements (e.g., device-to-device or peer-to-peer or ad hoc network arrangements, etc.).

Wireless network 100 may include several base stations, such as evolved Node Bs (eNBs) or gNode Bs (gNBs). These may be referred to as gNBs 105. The gNB may be a station that communicates with the UE and may also be referred to as a base station, a Node B, an access point, and so on. Each gNB 105 may provide communication coverage for a particular geographic area. In 3GPP, the term "cell" can refer to a particular geographic coverage area of a gNB and/or a gNB subsystem serving that coverage area, depending on the context in which the term is used. In implementations of the wireless network 100 herein, the gNBs 105 may be associated with the same operator or different operators (e.g., the wireless network 100 may include multiple operator wireless networks) and may provide wireless communication using one or more of the same frequencies as neighboring cells (e.g., one or more frequency bands in a licensed spectrum, an unlicensed spectrum, or a combination thereof).

The gNB may provide communication coverage for a macro cell or a small cell (such as a pico cell or a femto cell), and/or other types of cells. A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscriptions with the network provider. Small cells, such as picocells, typically cover a relatively small geographic area and may allow unrestricted access by UEs with service subscriptions with the network provider. Small cells, such as femtocells, typically also cover a relatively small geographic area (e.g., a home), and may provide, in addition to unrestricted access, restricted access for UEs associated with the femtocell (e.g., UEs in a Closed Subscriber Group (CSG), UEs of users in the home, etc.). The gNB for a macro cell may be referred to as a macro gNB. A gNB for a small cell may be referred to as a small cell gNB, pico gNB, femto gNB, or home gNB. In the example shown in fig. 1, the gNBs 105a, 105b, and 105c are macro gNBs of the macro cells 110a, 110b, and 110c, respectively. The gNBs 105x, 105y, and 105z are small cell gNBs, which may include pico or femto gNBs that provide service to small cells 110x, 110y, and 110z, respectively. A gNB may support one or more (e.g., two, three, four, etc.) cells.

Wireless network 100 may support synchronous or asynchronous operation. For synchronous operation, the gNBs may have similar frame timing, and transmissions from different gNBs may be approximately aligned in time. For asynchronous operation, the gNBs may have different frame timings, and transmissions from different gNBs may not be aligned in time. In some scenarios, the network may be implemented or configured to handle dynamic switching between synchronous or asynchronous operations.

UEs 115 are dispersed throughout wireless network 100, and each UE may be stationary or mobile. It should be appreciated that although mobile devices are commonly referred to as User Equipment (UE) in standards and specifications promulgated by the 3rd Generation Partnership Project (3GPP), such devices may also be referred to by those skilled in the art as Mobile Stations (MS), subscriber stations, mobile units, subscriber units, wireless units, remote units, mobile devices, wireless communication devices, remote devices, mobile subscriber stations, Access Terminals (AT), mobile terminals, wireless terminals, remote terminals, handsets, terminals, user agents, mobile clients, or some other suitable terminology. Within this document, a "mobile" device or UE need not have the capability to move, and may be stationary. Some non-limiting examples of mobile devices, such as embodiments that may include one or more of the UEs 115, include mobile stations, cellular (cell) phones, smart phones, Session Initiation Protocol (SIP) phones, laptops, Personal Computers (PCs), notebooks, netbooks, smartbooks, tablets, and Personal Digital Assistants (PDAs).
The mobile device may additionally be an "internet of things" (IoT) device, such as an automobile or other transportation vehicle, a satellite radio, a Global Positioning System (GPS) device, a logistics controller, a drone, a multi-axis aircraft, a quadcopter, a smart energy or security device, a solar panel or solar array, city lighting, water or other infrastructure; industrial automation and enterprise equipment; consumer and wearable devices, such as glasses, wearable cameras, smart watches, health or fitness trackers, implantable devices, gesture tracking devices, medical devices, digital audio players (e.g., MP3 players), cameras, game consoles, and so forth; and digital home or smart home devices such as home audio, video and multimedia devices, appliances, sensors, vending machines, smart lighting, home security systems, smart meters, and the like. A mobile device (such as UE 115) may be capable of communicating with a macro gNB, pico gNB, femto gNB, relay, and/or the like. In fig. 1, a lightning bolt (e.g., communication link 125) indicates a wireless transmission between a UE and a serving gNB, which is a gNB designated to serve the UE on the downlink and/or uplink, or a desired transmission between gNBs. Although the backhaul communication 134 is illustrated as a wired backhaul communication that may occur between the gNBs, it should be appreciated that the backhaul communication may additionally or alternatively be provided by wireless communication.

Fig. 2 shows a block diagram of a design of a base station/gNB 105 and a UE 115. These may be one of the base stations/gNBs in fig. 1 and one of the UEs in fig. 1. For a restricted association scenario (as mentioned above), the gNB 105 may be the small cell gNB 105z in fig. 1 and the UE 115 may be the UE 115z, and to access the small cell gNB 105z, the UE 115 may be included in an accessible UE list of the small cell gNB 105z. The gNB 105 may also be some other type of base station. The gNB 105 may be equipped with antennas 234a through 234t and the UE 115 may be equipped with antennas 252a through 252r.

At the gNB 105, a transmit processor 220 may receive data from a data source 212 and control information from a controller/processor 240. The control information may be used for a Physical Broadcast Channel (PBCH), a Physical Control Format Indicator Channel (PCFICH), a physical hybrid ARQ indicator channel (PHICH), a Physical Downlink Control Channel (PDCCH), etc. The data may be for a Physical Downlink Shared Channel (PDSCH), etc. Transmit processor 220 may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. Transmit processor 220 may also generate reference symbols (e.g., for a Primary Synchronization Signal (PSS), a Secondary Synchronization Signal (SSS), and a cell-specific reference signal (CRS)). A Transmit (TX) multiple-input multiple-output (MIMO) processor 230 may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to Modulators (MODs) 232a through 232t. Each modulator 232 may process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator 232 may additionally or alternatively process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators 232a through 232t may be transmitted via antennas 234a through 234t, respectively.

At UE 115, antennas 252a through 252r may receive downlink signals from the gNB 105 and may provide received signals to demodulators (DEMODs) 254a through 254r, respectively. Each demodulator 254 may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator 254 may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. A MIMO detector 256 may obtain received symbols from all demodulators 254a through 254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor 258 may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE 115 to a data sink 260, and provide decoded control information to a controller/processor 280.

On the uplink, at UE 115, a transmit processor 264 may receive and process data (e.g., for the PUSCH) from a data source 262 and control information (e.g., for the PUCCH) from a controller/processor 280. Transmit processor 264 may also generate reference symbols for a reference signal. The symbols from transmit processor 264 may be precoded by a TX MIMO processor 266 if applicable, further processed by modulators 254a through 254r (e.g., for SC-FDM, etc.), and transmitted to the gNB 105. At the gNB 105, the uplink signals from the UE 115 may be received by the antennas 234, processed by the demodulators 232, detected by a MIMO detector 236 if applicable, and further processed by a receive processor 238 to obtain decoded data and control information sent by the UE 115. Processor 238 may provide the decoded data to a data sink 239 and the decoded control information to controller/processor 240.

Controllers/processors 240 and 280 may direct operation at gNB 105 and UE 115, respectively. Controller/processor 240 and/or other processors and modules at gNB 105 and/or controller/processor 280 and/or other processors and modules at UE 115 may perform or direct the performance of various processes for the techniques described herein, such as to perform or direct the performance illustrated in fig. 6, 7A, 7B, 7C, 8, 9, and 10 and/or other processes for the techniques described herein. Memories 242 and 282 may store data and program codes for gNB 105 and UE 115, respectively. A scheduler 244 may schedule UEs for data transmission on the downlink and/or uplink.

As mentioned above, in the 5G specification, interleaving is handled differently than in LTE. As currently envisaged in the 5G NR standard, interleaving occurs after rate matching on the transmit side, whereas in LTE, rate matching occurs before interleaving on the transmit side. Implementing interleaving after rate matching and employing a repetition rate matching scheme, as in 5G, results in the need to buffer the incoming LLRs. Given the typically large number of repetitions, this buffering requires a very large buffer, resulting in a considerable area penalty for the circuitry implementation. For example, the number of LLRs may exceed 1.5 million. At six bits per LLR, more than 9 million bits of buffer memory are required. Therefore, an efficient method and apparatus may be desired to perform deinterleaving and de-repetition on the fly to avoid incurring this memory area penalty. As set forth below, the present disclosure provides a technique that may reduce the memory area required to buffer LLRs by more than 98%. For example, the memory area may be as small as a single code block, i.e., 3 x 8448 LLRs. In this case, multiple memory banks are not required, as a conventional code block buffer can be used to meet this memory requirement.

The techniques discussed in this disclosure may address this challenge by introducing new interleaving/rate matching and deinterleaving/de-rate matching techniques. For example, on the transmit side (at the transmitter), multiple interleaving and rate matching engines operate in parallel, accessing the code block buffer at different starting points. On the receive side (at the receiver), multiple deinterleaving and de-rate matching engines operate in parallel to process the LLRs of each demodulated symbol at the different offsets produced by interleaving and rate matching at the transmit side. This arrangement advantageously reduces the amount of interleaver memory required to buffer the LLRs, as described above. For example, using ten deinterleaving and de-rate matching engines can reduce the required interleaver memory to as little as a single code block, i.e., 3 x 8448 LLRs. Thus, the required memory size is reduced to a fraction of the memory size required without the multiple-engine encoding/decoding techniques presented herein.

It is contemplated that an interleaver having multiple engines may be configured in various ways. According to one embodiment, the interleaver may be a rectangular interleaver with N rows, where N is Log2(constellation_size). If each row is treated as an independent rate matching engine, ten independent rate matching engines may run in parallel at different starting offsets for constellations up to QAM 1024. For on-the-fly rate matching and interleaving (i.e., the transmit side), the ten engines may independently read the same code block buffer at different starting points. For on-the-fly de-rate matching and deinterleaving (i.e., the receive side), the ten engines may simultaneously write de-rate matched results into the same HARQ buffer while combining with the previous data in the HARQ buffer on the fly. One resulting advantage is a savings of 98% or more in interleaver/deinterleaver memory size. Another resulting advantage is that no multi-bank memory is required, since any conventional code block buffer can be used as deinterleaver memory. Examples involving three or more engines, six engines, eight engines, and ten engines are given herein.
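As an illustrative sketch of the transmit-side arrangement (the function name, list-based buffer, and per-engine offset rule drawn from the formula given below are assumptions of this sketch, not a definitive implementation), N parallel line engines reading one shared code block buffer at staggered starting points might look like:

```python
import math

def interleave_rate_match(code_block, constellation_size, columns):
    """Sketch of on-the-fly interleaving/rate matching: N = log2(constellation
    size) line engines each read the same circular code block buffer starting
    at a different offset, wrapping around (repetition rate matching)."""
    encoded_bits = len(code_block)
    n_engines = int(math.log2(constellation_size))  # e.g., 8 for QAM256, 10 for QAM 1024
    rows = []
    for line_index in range(n_engines):
        start = (line_index * columns) % encoded_bits  # per-engine starting offset
        # Each engine emits one value per column, wrapping circularly.
        rows.append([code_block[(start + k) % encoded_bits] for k in range(columns)])
    return rows  # one interleaved and rate-matched stream per engine
```

Because every engine only reads, all N engines can run against the same single-port code block buffer without a multi-bank memory, which is the stated advantage of this arrangement.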

It should be understood, however, that since QAM 1024 support is required in the 5G standard, it is currently preferred to use ten engines (i.e., on-the-fly encoding or decoding modules), although any number of two or more engines may be utilized. As a general rule, the number of engines may be greater than or equal to Log2(constellation_size). The constellation size may be as small as four, in which case the number of engines used may be two.

Turning to fig. 3A, an example deinterleaver memory 300A for use with eight engines may be configured as a rectangular memory for a QAM256 constellation, having 32 encoded bits and 21 columns. The LLRs of each demodulated symbol on which deinterleaving and de-rate matching are performed include the LLRs shown in the respective columns of the table illustrated in memory 300A (e.g., [0 21 10 31 20 9 30 19], [1 22 11 0 21 10 31 20], …). The eight engines may each be configured to run in parallel on a single line having the same encoded bit size of 32 but with different starting offsets. For example, the offset may be determined according to:

start_offset = (line_index × columns) % encoded_bits

where the line index runs from 0 to N − 1 (0 through 7 in this eight-engine example, or 0 through 9 for ten engines). It is also contemplated that inter-engine combining may occur only when the number of encoded bits is greater than or equal to the number of columns (i.e., the row length) of deinterleaver memory 300A. Turning briefly to fig. 5, however, another example deinterleaver memory 500 configured for use with six engines indicates that intra-Transmission Time Interval (TTI) combining and line engine combining can be performed when the number of encoded bits is less than the number of columns (i.e., the row length) of the deinterleaver memory 500.

Although the foregoing examples show the starting LLR at 0, it should be understood that embodiments may start from any number. For example, in operation according to embodiments of the present disclosure, the retransmitted LLRs may start at any position, such as offset by an amount e_offset. Thus, the starting point for each line engine in such embodiments may follow the equation:

start_offset = (e_offset + line_index × columns) % encoded_bits

where the line index again runs from 0 to N − 1. Fig. 3B shows an example deinterleaver memory 300B for use with eight engines in an example where e_offset is 16, the number of columns is 21, and the number of encoded bits is 32.
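The two offset equations above differ only in the e_offset term. A minimal sketch (the helper name is an assumption) computing the per-engine starting offsets for the eight-engine, 21-column, 32-bit parameters of figs. 3A and 3B:

```python
def start_offset(line_index, columns, encoded_bits, e_offset=0):
    # Per-line-engine starting offset, following the formula in the text.
    return (e_offset + line_index * columns) % encoded_bits

# Initial transmission (fig. 3A parameters): e_offset = 0
offsets = [start_offset(i, columns=21, encoded_bits=32) for i in range(8)]
# → [0, 21, 10, 31, 20, 9, 30, 19]

# Retransmission (fig. 3B parameters): e_offset = 16
offsets_retx = [start_offset(i, 21, 32, e_offset=16) for i in range(8)]
# → [16, 5, 26, 15, 4, 25, 14, 3]
```

Note that the e_offset = 0 result reproduces the first column of the fig. 3A table.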

Turning now to fig. 4, it is contemplated that eight deinterleaving and de-rate matching engines may be used. These engines may be combined around a circular deinterleaver memory starting at offset zero and output the final deinterleaving and de-rate matching results 400. To reduce memory usage, all engines may be configured to run on the same circular HARQ buffer of width 32 LLRs. Each engine may be configured to access the HARQ buffer only once every 32 cycles, so the HARQ buffer has sufficient memory bandwidth to support up to ten engines performing read/write operations and combining on the fly through read/combine/write operations on each memory line.

Turning now to fig. 6, deinterleaver/de-rate matcher 600 has up to ten deinterleaving and de-rate matching engines (illustrated by engines 602A, 602B, and 602C), each having its own set of adders for combining the de-rate matching results with the data stored in HARQ buffer 604. In operation, the reader 606 reads 32 LLRs at a time from the LLR buffer 608 and distributes two LLRs to each of the engines 602A-602C in each cycle. In turn, the read component RDR and write component WRT of each of the engines 602A-602C access the HARQ buffer 604 via the arbitration block 610, and each engine performs read, combine, and write operations in the HARQ buffer 604 at a different starting point.

Turning now to fig. 7A, another implementation of a deinterleaver/de-rate matcher has up to ten deinterleaving and de-rate matching engines (illustrated by engines 702A, 702B, and 702C) that all share an adder for combining the de-rate matching results with data stored in a HARQ buffer. Reader 706 provides two LLRs to each of engines 702A-702C in each cycle, and engines 702A-702C store the LLRs in MIMO_FIFO registers that are each two LLRs wide and 32 LLRs long. The deinterleaving components of engines 702A-702C operate on the received LLRs over a number of cycles to perform a deinterleaving operation and obtain a deinterleaved result that is 32 LLRs in length.

Turning now to fig. 7B, these engines take turns accessing the HARQ buffer 704 at different starting points and use adders 712A and 712B to combine the deinterleaved results 714 with the data 716 already in the buffer 704. Thus, each engine writes the deinterleaved and de-rate matched results 718 to the correct address in the HARQ buffer 704.
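The read/combine/write behavior can be sketched as a simplified software model (function name and tuple layout are assumptions; real hardware would arbitrate buffer access as in fig. 6 rather than loop sequentially):

```python
def combine_into_harq(harq, engine_results):
    """Software model of on-the-fly HARQ combining: each engine reads a
    line of the circular HARQ buffer, adds its deinterleaved LLR block,
    and writes the combined result back at its own starting address."""
    size = len(harq)
    for start, block in engine_results:      # (start address, 32-LLR block)
        for k, llr in enumerate(block):
            addr = (start + k) % size        # circular addressing
            harq[addr] = harq[addr] + llr    # read, combine, write back
    return harq
```

Because each engine touches a different starting address and the buffer is circular, repeated transmissions accumulate soft information in place instead of requiring a separate buffer per repetition.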

Turning now to fig. 7C, the ring register 720 employed by each engine may operate as a MIMO_FIFO memory 722 having various inputs and outputs. For example, output signals of MIMO_FIFO memory 722 may report the amount of space available, n_space, and the number of words stored, n_words, in memory 722. Additionally, the input signals may include the number of words to be written, wr_num, a data input, din, and a pulse, wr_req, triggering the writing of data din into the ring register 720. Further, the input and output signals may include the number of words to be read out, rd_num, a data output, dout, and a pulse, rd_req, triggering the reading of data dout from the ring register 720. Each engine may be provided with its own ring register.
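As a behavioral sketch of this interface (the class name, method names, and depth are assumptions; the hardware register is described only by its signals), the ring register could be modeled as:

```python
from collections import deque

class MimoFifo:
    """Behavioral model of the MIMO_FIFO ring register: multi-word writes
    (wr_num/din/wr_req) and reads (rd_num/dout/rd_req), with n_space and
    n_words status outputs."""
    def __init__(self, depth=32):
        self.depth = depth
        self.buf = deque()

    @property
    def n_words(self):           # number of words currently stored
        return len(self.buf)

    @property
    def n_space(self):           # amount of space available
        return self.depth - len(self.buf)

    def write(self, din):        # models a wr_req pulse with wr_num = len(din)
        assert len(din) <= self.n_space, "overflow"
        self.buf.extend(din)

    def read(self, rd_num):      # models a rd_req pulse; returns dout
        assert rd_num <= self.n_words, "underflow"
        return [self.buf.popleft() for _ in range(rd_num)]
```

In this model, the two-LLR-per-cycle input of fig. 7A maps to `write` calls of length two, while the deinterleaving component drains the register with a single 32-word `read`.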

Turning now to fig. 8, the interleaver/rate matcher 800 has up to ten interleaving and rate matching engines (illustrated by engines 802A, 802B, and 802C) that each read multiple copies of data 804 from different starting points in a code block buffer 806. The input 32 LLRs are stored in the respective MIMO_FIFO buffers of the engines 802A-802C, and each of these engines provides two LLRs per cycle to the transmit buffer via writer block 808.

Turning now to fig. 9, a method of performing data interleaving and rate matching for transmission includes: at block 900, reading input data from a code block buffer. The input data read at block 900 is read by the first interleaving and rate matching engine starting at a first starting point of the code block buffer. As a result, the first interleaving and rate matching engine generates first interleaved and rate matched data at block 900. Processing may proceed from block 900 to block 902.

At block 902, the input data is read again from the code block buffer. The input data read at block 902 is read by the second interleaving and rate matching engine starting at a second starting point of the code block buffer. As a result, the second interleaving and rate matching engine generates second interleaved and rate matched data at block 902. Processing may proceed from block 902 to block 904.

At block 904, encoded data comprising the first interleaved and rate matched data and the second interleaved and rate matched data is provided. For example, block 904 may include writing the encoded output data to a transmit buffer. Processing may return from block 904 to block 900.

Blocks 900 and 902 may include employing a rectangular interleaver and/or employing a code block buffer with Log2(constellation_size) lines as interleaver memory. Additionally, it should be understood that blocks 900 and 902 may include having the first and second interleaving and rate matching engines each run on different columns or rows of the interleaver memory. It should be further understood that blocks 900 and 902 may include the first interleaving and rate matching engine and the second interleaving and rate matching engine each running in parallel with the same encoded bit size and different starting offsets. It should be further understood that additional blocks may be included that involve utilizing N interleaving and rate matching engines equal in number to Log2(constellation_size).
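A two-engine version of the fig. 9 flow can be sketched as follows (a simplified software model; the function name, the choice of starting points, and the list-based buffers are assumptions of this sketch):

```python
def encode_for_transmission(code_block, columns, second_start):
    """Blocks 900-904 with two engines: both read the same code block
    buffer at different starting points, wrapping circularly, and the
    encoded output contains both interleaved and rate matched streams."""
    n = len(code_block)
    first = [code_block[k % n] for k in range(columns)]                    # block 900
    second = [code_block[(second_start + k) % n] for k in range(columns)]  # block 902
    return first + second                                                  # block 904
```

Both reads target the same buffer, so neither engine needs a private copy of the code block; only the starting points differ.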

Referring now to fig. 10, a method of processing received data includes: at block 1000, reading input data from a log-likelihood ratio (LLR) buffer. The reading at block 1000 is performed by the first deinterleaving and de-rate matching engine starting at a first starting point of the LLR buffer. As a result, first deinterleaved and de-rate matched data is generated at block 1000. Processing may proceed from block 1000 to block 1002.

At block 1002, the input data of the LLR buffer is read by a second deinterleaving and de-rate matching engine operating in parallel with the first deinterleaving and de-rate matching engine. This reading is performed starting at a second starting point of the LLR buffer. As a result, second deinterleaved and de-rate matched data is generated at block 1002. Processing may proceed from block 1002 to block 1004.

At block 1004, deinterleaved and de-rate matched data is provided. The provided data includes the first deinterleaved and de-rate matched data and the second deinterleaved and de-rate matched data. For example, block 1004 may include causing the first deinterleaving and de-rate matching engine and the second deinterleaving and de-rate matching engine to write de-rate matching results into a hybrid automatic repeat request (HARQ) buffer in parallel, thereby combining the results with the previous data in the HARQ buffer.

Embodiments may also include alternative arrangements and features. For example, blocks 1000 and 1002 may include employing a rectangular deinterleaver and/or employing a code block buffer with Log2(constellation_size) lines as deinterleaver memory. Further, blocks 1000 and 1002 may include having the first deinterleaving and de-rate matching engine and the second deinterleaving and de-rate matching engine each run on different columns or rows of the deinterleaver memory. As another example, blocks 1000 and 1002 may include having the first deinterleaving and de-rate matching engine and the second deinterleaving and de-rate matching engine each run in parallel with the same encoded bit size and different starting offsets. As yet another example, an additional block may be included that involves utilizing N deinterleaving and de-rate matching engines equal in number to Log2(constellation_size).

Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The functional blocks and modules described herein (e.g., in fig. 2 and 6-10) may comprise processors, electronics devices, hardware devices, electronic components, logic circuits, memories, software codes, firmware codes, etc., or any combination thereof.

Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Skilled artisans will also readily recognize that the order or combination of components, methods, or interactions described herein is merely an example and that the components, methods, or interactions of the various aspects of the disclosure may be combined or performed in a manner different from that illustrated and described herein.

The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. Computer-readable storage media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of instructions or data structures and which can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, a connection may properly be termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, or Digital Subscriber Line (DSL), then the coaxial cable, fiber optic cable, twisted pair, or DSL are included in the definition of medium. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), hard disk, solid state disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

As used herein, including in the claims, the term "and/or" as used in a listing of two or more items means that any one of the listed items can be employed alone, or any combination of two or more of the listed items can be employed. For example, if a composition is described as comprising components A, B, and/or C, the composition may comprise only A; only B; only C; a combination of A and B; a combination of A and C; a combination of B and C; or a combination of A, B, and C. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" represents any of A or B or C or AB or AC or BC or ABC (i.e., A and B and C) or any combination thereof.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
