Method and apparatus for reducing scheduling delay in wireless communication system

Document No.: 75150    Publication date: 2021-10-01

Reading note: This technology, "Method and apparatus for reducing scheduling delay in wireless communication system", was created by M. Tesanovic and 白祥圭 (Baek Sang-kyu) on 2020-02-07. Its main content is as follows: the second network node in the wireless communication system comprises a transceiver and at least one processor configured to: control the transceiver to receive at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from a first network node; control the transceiver to transmit at least one of a second SR and a second BSR to a third network node based on the first SR or the first BSR prior to receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node; control the transceiver to receive a first Uplink (UL) grant corresponding to the second SR or the second BSR from the third network node; control the transceiver to receive the data from the first network node; and control the transceiver to transmit the data to the third network node.

1. A method performed by a second network node in a wireless communication system, the method comprising:

receiving at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from a first network node;

prior to receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node, transmitting at least one of a second SR and a second BSR to a third network node based on the first SR or the first BSR;

receiving a first Uplink (UL) grant corresponding to the second SR or the second BSR from the third network node;

receiving the data from the first network node; and

transmitting the data to the third network node.

2. The method of claim 1, further comprising: transmitting a second UL grant corresponding to the first SR or the first BSR to the first network node,

wherein transmitting at least one of the second SR and the second BSR to the third network node comprises transmitting at least one of the second SR and the second BSR based on the second UL grant.

3. The method of claim 1, wherein transmitting the second BSR to the third network node comprises:

determining whether to transmit the second BSR based on at least one of the first BSR and an SR configuration of the first SR; and

transmitting the second BSR based on the determination result.

4. The method of claim 3, wherein determining whether to transmit the second BSR comprises:

determining whether to transmit the second BSR based on a priority of the data to be transmitted.

5. The method of claim 4, wherein determining whether to transmit the second BSR based on the priority of the data to be transmitted comprises:

determining to transmit the second BSR if the priority of the data to be transmitted is higher than the priority of the data present in the buffer of the second network node.

6. The method of claim 3, wherein determining whether to transmit the second BSR comprises:

determining whether to transmit the second BSR based on a number of hops required for the data to reach a destination of the data.

7. The method of claim 1, wherein the second BSR includes buffer status information of the second network node corresponding to the data to be transmitted.

8. The method of claim 7, wherein the second BSR is assigned a Logical Channel Identifier (LCID) that is different from another LCID assigned to another BSR, and

wherein the other BSR includes buffer status information of the second network node corresponding to data currently present in the buffer.

9. The method of claim 1, wherein transmitting the second BSR to the third network node comprises determining to transmit the second BSR based on a priority of the data to be transmitted.

10. The method of claim 1, wherein transmitting the second BSR to the third network node includes determining to transmit the second BSR based on a number of hops required for the data to reach a destination of the data.

11. A second network node in a wireless communication system, the second network node comprising:

a transceiver; and

at least one processor configured to:

control the transceiver to receive at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from a first network node;

control the transceiver to transmit at least one of a second SR and a second BSR to a third network node based on the first SR or the first BSR prior to receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node;

control the transceiver to receive a first Uplink (UL) grant from the third network node corresponding to the second SR or the second BSR;

control the transceiver to receive the data from the first network node; and

control the transceiver to transmit the data to the third network node.

12. The second network node of claim 11, wherein the at least one processor is further configured to:

control the transceiver to send a second UL grant corresponding to the first SR or the first BSR to the first network node, and

control the transceiver to transmit at least one of the second SR and the second BSR to the third network node based on the second UL grant.

13. The second network node of claim 11, wherein the at least one processor is further configured to:

determine whether to transmit the second BSR based on at least one of the first BSR and an SR configuration of the first SR; and

control the transceiver to transmit the second BSR to the third network node based on the determination.

14. The second network node of claim 13, wherein the at least one processor is further configured to:

determine whether to transmit the second BSR based on a priority of the data to be transmitted.

15. The second network node of claim 14, wherein the at least one processor is further configured to:

determine to transmit the second BSR if the priority of the data to be transmitted is higher than the priority of the data present in the buffer of the second network node.

Technical Field

The present disclosure relates to the reduction of scheduling delay in a telecommunication system, and in particular to latency issues in systems implementing Integrated Access and Backhaul (IAB), as known and used at least in fifth generation (5G) or New Radio (NR) systems.

Background

In order to meet the increased demand for wireless data traffic since the deployment of fourth generation (4G) communication systems, efforts have been made to develop improved fifth generation (5G) or pre-5G communication systems. The 5G or pre-5G communication system is also referred to as a "beyond 4G network" or a "post-LTE (Long Term Evolution) system". The 5G communication system is considered to be implemented in higher frequency (mmWave) bands (e.g., the 60 GHz band) in order to achieve higher data rates. In order to reduce propagation loss of radio waves and increase transmission distance, beamforming, massive Multiple Input Multiple Output (MIMO), full-dimensional MIMO (FD-MIMO), array antennas, analog beamforming, and large-scale antenna techniques are discussed for the 5G communication system. Further, in the 5G communication system, development of system network improvement is under way based on advanced small cells, cloud Radio Access Networks (RANs), ultra-dense networks, device-to-device (D2D) communication, wireless backhaul, moving networks, cooperative communication, coordinated multipoint (CoMP), receiver-side interference cancellation, and the like. In 5G systems, hybrid Frequency Shift Keying and Quadrature Amplitude Modulation (FQAM) and Sliding Window Superposition Coding (SWSC) have been developed as Advanced Coding Modulation (ACM), and filter bank multi-carrier (FBMC), non-orthogonal multiple access (NOMA), and Sparse Code Multiple Access (SCMA) have been developed as advanced access techniques.

The Internet, which is a human-centric connectivity network where humans generate and consume information, is now evolving into the Internet of Things (IoT), in which distributed entities, such as things, exchange and process information without human intervention. The Internet of Everything (IoE), which is a combination of IoT technology and big data processing technology through connection with a cloud server, has also emerged. As IoT implementation requires technical elements such as "sensing technology," "wired/wireless communication and network infrastructure," "service interface technology," and "security technology," sensor networks, machine-to-machine (M2M) communication, Machine Type Communication (MTC), and so forth have recently been investigated. Such an IoT environment can provide intelligent internet technology services that create new value for human life by collecting and analyzing data generated among networked things. Through the fusion and combination of existing Information Technology (IT) with various industrial applications, IoT can be applied to various fields including smart homes, smart buildings, smart cities, smart or connected cars, smart grids, healthcare, smart appliances, and advanced medical services.

In line with this, various attempts have been made to apply the 5G communication system to the IoT network. For example, technologies such as sensor networks, MTC, and M2M communication may be implemented by beamforming, MIMO, and array antennas. Application of a cloud RAN as the big data processing technology described above may also be considered an example of the convergence between 5G technology and IoT technology.

As described above, various services can be provided in line with the development of wireless communication systems, and a method for easily providing such services is therefore required. For example, there is a need for a method for reducing latency in a system implementing Integrated Access and Backhaul (IAB).

Disclosure of Invention

A second network node in a wireless communication system is provided. The second network node comprises: a transceiver; and at least one processor configured to: receive at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from a first network node; transmit, prior to receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node, at least one of a second SR and a second BSR to a third network node based on the first SR or the first BSR; receive a first Uplink (UL) grant corresponding to the second SR or the second BSR from the third network node; receive the data from the first network node; and transmit the data to the third network node.

Drawings

For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which like reference numbers represent like parts:

fig. 1 illustrates a message exchange process involving three nodes in a multi-hop IAB network arrangement in a wireless communication system, according to various embodiments of the present disclosure;

fig. 2 illustrates a message exchange process involving three nodes in a multi-hop IAB network arrangement in accordance with at least one embodiment of the present disclosure;

fig. 3 illustrates a method performed by a second network node according to an embodiment of the present disclosure;

fig. 4 is a block diagram illustrating a UE in accordance with various embodiments of the present disclosure; and

fig. 5 is a block diagram illustrating a network node according to various embodiments of the present disclosure.

Detailed Description

[ best mode ]

In one embodiment, a method performed by a second network node in a wireless communication system is provided. The method comprises the following steps: receiving at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from a first network node; transmitting, prior to receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node, at least one of a second SR and a second BSR to a third network node based on the first SR or the first BSR; receiving a first Uplink (UL) grant corresponding to the second SR or the second BSR from the third network node; receiving the data from the first network node; and transmitting the data to the third network node.

In an embodiment, the method further comprises: transmitting a second UL grant corresponding to the first SR or the first BSR to the first network node, wherein transmitting at least one of the second SR and the second BSR to the third network node comprises: transmitting at least one of the second SR and the second BSR based on the second UL grant.

In an embodiment, transmitting the second BSR to the third network node comprises: determining whether to transmit the second BSR based on at least one of the first BSR and the SR configuration of the first SR; and transmitting the second BSR based on the determination result.

In an embodiment, determining whether to transmit the second BSR includes: determining whether to transmit the second BSR based on the priority of the data to be transmitted.

In an embodiment, determining whether to transmit the second BSR based on the priority of the data to be transmitted comprises: determining to transmit the second BSR when the priority of the data to be transmitted is higher than the priority of the data present in the buffer of the second network node.

In an embodiment, determining whether to transmit the second BSR includes: determining whether to transmit the second BSR based on the number of hops required for the data to reach its destination.

In an embodiment, the second BSR includes buffer status information of the second network node corresponding to the data to be transmitted.

In an embodiment, the second BSR is allocated a Logical Channel Identifier (LCID) that is different from another LCID allocated to another BSR, and the other BSR includes buffer status information of the second network node corresponding to data currently present in the buffer.

In one embodiment, a second network node in a wireless communication system is provided. The second network node comprises a transceiver and at least one processor configured to: receive at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from a first network node; transmit, prior to receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node, at least one of a second SR and a second BSR to a third network node based on the first SR or the first BSR; receive a first Uplink (UL) grant corresponding to the second SR or the second BSR from the third network node; receive the data from the first network node; and transmit the data to the third network node.

In an embodiment, the at least one processor is further configured to:

send a second UL grant corresponding to the first SR or the first BSR to the first network node, and

transmit at least one of the second SR and the second BSR to the third network node based on the second UL grant.

In an embodiment, the at least one processor is further configured to: determine whether to transmit the second BSR based on at least one of the SR configuration of the first SR and the first BSR; and transmit the second BSR to the third network node based on the determination result.

In an embodiment, the at least one processor is further configured to: determine whether to transmit the second BSR based on the priority of the data to be transmitted.

In an embodiment, the at least one processor is further configured to: determine to transmit the second BSR when the priority of the data to be transmitted is higher than the priority of the data present in the buffer of the second network node.

In an embodiment, the at least one processor is further configured to: determine whether to transmit the second BSR based on the number of hops required for the data to reach its destination.

In an embodiment, the second BSR includes buffer status information of the second network node corresponding to the data to be transmitted.

In an embodiment, the second BSR is allocated a Logical Channel Identifier (LCID) that is different from another LCID allocated to another BSR, and the other BSR includes buffer status information of the second network node corresponding to data currently present in the buffer.

In one embodiment, a method is provided for requesting resources from a node in a multi-node telecommunications system in which a first node sends data to a second node and the second node sends the data to a third node, the method comprising the steps of: the first node indicating to the second node that it has data intended for the second node; in response, the second node determining whether to request resources from the third node; based at least in part on the request from the second node, the third node providing resources to the second node; and the second node transmitting the data received from the first node to the third node.

In an embodiment, the step of the first node indicating to the second node that it has data intended for the second node comprises: the first node transmits a Scheduling Request (SR) message or a Buffer Status Report (BSR) message.

In an embodiment, the step of the second node requesting resources from the third node in response occurs before the second node receives the data sent from the first node.

In an embodiment, the step of the second node requesting resources from the third node in response further comprises: requesting the resources only when one or more additional criteria are met.

In an embodiment, the one or more additional criteria include information from the first node indicating:

- the presence in the first node of data of a specific type or belonging to a specific service; or

- the presence of high priority data; or

- the presence of data of a certain priority relative to the priority of the existing data in the node's own buffer; or

- that the total buffer occupancy at the first node exceeds a certain threshold; or

- that the amount of data of a certain type, belonging to a certain service, or of high priority exceeds a certain threshold; or

- the presence of data that will require a number of hops exceeding a defined threshold to reach its destination; or

- the presence of data whose transmission would require a class of resources that has not yet been configured.

In an embodiment, the information from the first node is included in a Buffer Status Report (BSR).

In an embodiment, the one or more additional criteria include the time at which the second node determines the grant for the first node, and/or the time until the resources referenced in the grant become available, exceeding a defined threshold.

In an embodiment, the one or more additional criteria include the second node determining that it may not provide resources to the first node within a defined period.

In an embodiment, the one or more additional criteria include the first node's use of a particular configuration for sending the resource request, indicating that the resource request originates, at least in part, from:

- a high priority logical channel; or

- a logical channel of a certain priority relative to the priority of the existing data in the node's own buffer; or

- a logical channel dedicated to a specific service or having high priority data exceeding a defined threshold; or

- a logical channel with data that would require a number of hops exceeding a defined threshold to reach its destination; or

- a logical channel with data that would require a class of resources that has not been configured.

In an embodiment, the information from the first node is included in a Scheduling Request (SR).

In an embodiment, the step of the second node requesting resources from the third node in response comprises: forwarding the buffer status of the first node to the third node.

In an embodiment, the step of the second node requesting resources from the third node in response comprises: sending to the third node information regarding all or a portion of the buffer status of the first node, translated into the status of the second node's buffer.

In an embodiment, translating the full or partial buffer status of the first node into the status of the second node's buffer comprises: matching the priorities of the logical channels and/or the services carried by the logical channels, so as to obtain an aggregate value comprising the actual buffer status of the second node and the expected change in the buffer status of the second node.

In an embodiment, the information sent to the third node is any one of:

(a) indicating an expected increase in the buffer status of the second node after receiving the data from the first node; or

(b) indicating a combination with the existing data in the buffer of the second node.

In an embodiment, in the case of option (a), the information comprises an indication relating to the expected increase.

In an embodiment, in the case of option (b), the information comprises an indication relating to the total amount of data.

In an embodiment, the information sent to the third node includes a deduction of the expected reduction in buffer status occupancy based on UL resources that have already been granted to the second node.

In an embodiment, the expected increase in buffer status, or the combination with the existing data in the second node's buffer, is included in padding of the resources assigned to the second node when the padding size is equal to or greater than the size of the expected buffer status, or of the combination with the existing data in the second node's buffer, plus the size of any associated subheaders or control elements.

In an embodiment, when the padding size is smaller than the combined size of the actual buffer status and the expected increase (or the combination with the existing data in the second node's buffer), which information to send is determined based on a priority order.

In an embodiment, the priority-based determination is made on the basis of the existing buffer status information that was sent most recently.

In an embodiment, there is provided an apparatus arranged to perform at least one of the above methods.

In an embodiment, at least one base station operable to perform at least one of the above methods is provided.

While several embodiments of the present disclosure have been shown and described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the scope of the disclosure as defined in the following claims.

For a better understanding of the present disclosure, and to show how embodiments thereof may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings.

[ modes for the invention ]

Before proceeding with the following detailed description, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, and the like; and the term "controller" means any device, system, or part thereof that controls at least one operation, and such a device may be implemented in hardware, firmware, or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely.

Further, the various functions described below may be implemented or supported by one or more computer programs, each of which is formed from computer-readable program code and embodied in a computer-readable medium. The terms "application" and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in suitable computer-readable program code. The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code. The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as Read Only Memory (ROM), Random Access Memory (RAM), a hard disk drive, a Compact Disc (CD), a Digital Video Disc (DVD), or any other type of memory. A "non-transitory" computer-readable medium excludes wired, wireless, optical, or other communication links that transmit transitory electrical or other signals. Non-transitory computer-readable media include media that can permanently store data as well as media in which data can be stored and subsequently overwritten, such as rewritable optical disks or erasable storage devices.

Definitions for certain words and phrases are provided throughout this patent document. Those of ordinary skill in the art should understand that in many, if not most, instances such definitions apply to prior as well as future uses of the defined words and phrases.

The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in this understanding, but these details are to be considered illustrative only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.

The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it will be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.

It should be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to a "component surface" includes reference to one or more such surfaces.

In describing the embodiments, technical content that is well known in the related art and not directly related to the present disclosure is not provided. By omitting this redundant description, the essence of the present disclosure is not obscured and can be clearly explained.

For the same reason, components may be exaggerated, omitted, or schematically shown in the drawings for clarity. Further, the size of each component does not completely reflect its actual size. In the drawings, like numbering represents like elements.

As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements rather than the individual elements of the list. Throughout this disclosure, the expression "at least one of a, b, and c" indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. Advantages and features of one or more embodiments of the present disclosure and methods of accomplishing the same may be understood more readily by reference to the following detailed description of embodiments and the accompanying drawings. In this regard, the present embodiments may have different forms and should not be construed as limited to the descriptions set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the embodiments to those skilled in the art.

Here, it will be understood that combinations of blocks in the flowchart or process flow diagrams can be implemented by computer program instructions. These computer program instructions may be loaded onto a processor of a general purpose computer, special purpose computer, or another programmable data processing apparatus, so that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks. The computer program instructions may be stored in a computer usable or computer-readable memory that can direct a computer or another programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or another programmable data processing apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions that execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart block or blocks.

Further, each block may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Here, the term "unit" in the embodiments of the present disclosure means a software component or a hardware component, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), that performs a specific function. However, the term "unit" is not limited to software or hardware. A "unit" may be configured to reside in an addressable storage medium or may be configured to execute on one or more processors. Thus, for example, the term "unit" may refer to components such as software components, object-oriented software components, class components and task components, and may include processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, or variables. The functionality provided by the components and "units" may be combined into a smaller number of components and "units" or may be divided into additional components and "units". Further, the components and "units" may be implemented to reproduce one or more Central Processing Units (CPUs) in a device or a secure multimedia card. Additionally, in an embodiment, a "unit" may comprise at least one processor. In this disclosure, a controller may also be referred to as a processor.

Wireless communication systems have evolved beyond providing early voice-oriented services into, for example, broadband wireless communication systems that provide high-speed and high-quality packet data services, such as the high-speed packet access (HSPA), Long Term Evolution (LTE) or evolved universal terrestrial radio access (E-UTRA), and LTE-Advanced (LTE-A) communication standards of 3GPP, the high-rate packet data (HRPD) and Ultra Mobile Broadband (UMB) communication standards of 3GPP2, and the IEEE 802.16e communication standard. Fifth generation (5G) or New Radio (NR) communication standards are being developed for 5G wireless communication systems.

Hereinafter, one or more embodiments will be described with reference to the accompanying drawings. Also, in the description of the present disclosure, some detailed explanations of related functions or configurations are omitted when it is considered that they may unnecessarily obscure the substance of the present disclosure. All terms used herein, including descriptive or technical terms, should be interpreted as having a meaning that is obvious to one of ordinary skill in the art. However, these terms may have different meanings according to the intention of a person having ordinary skill in the art, precedent cases, or the appearance of new technology, and thus the terms used herein must be defined based on their meanings and the description throughout the specification. Hereinafter, a base station may be an entity that performs resource allocation for a terminal, and may be at least one of a gNode B (gNB), an eNode B (eNB), a Node B, a Base Station (BS), a radio access unit, a base station controller, and a node on a network. A terminal may include a User Equipment (UE), a Mobile Station (MS), a cellular phone, a smartphone, a computer, a multimedia system capable of performing communication functions, and the like. In the present disclosure, downlink (DL) is a wireless transmission path of a signal transmitted from a base station to a terminal, and uplink (UL) is a wireless transmission path of a signal transmitted from a terminal to a base station. Throughout this specification, a layer (or a layer apparatus) may also be referred to as an entity. Also, hereinafter, one or more embodiments of the present disclosure are described using an LTE or LTE-A system as an example, but the one or more embodiments may also be applied to other communication systems having a similar technical background or channel form. For example, 5G mobile communication technology (5G, New Radio (NR)) developed after LTE-A may be included. Further, one or more embodiments may be applied to other communication systems with some modifications, as determined by one skilled in the art, without departing from the scope of the present disclosure.

In an LTE system, which is a representative example of a broadband wireless communication system, an Orthogonal Frequency Division Multiplexing (OFDM) scheme is used in the DL, and a single carrier frequency division multiple access (SC-FDMA) scheme is used in the UL. The UL refers to a radio link through which a terminal, UE, or MS transmits data or control signals to a BS or an eNode B, and the DL refers to a radio link through which a BS transmits data or control signals to a terminal. In such a multiple access scheme, the data or control information of each user is distinguished by allocating and operating the time-frequency resources carrying the data or control information of each user so that they do not overlap each other, i.e., so that orthogonality is established.

Terms such as physical channels and signals in existing LTE or LTE-A systems may be used to describe the methods and apparatuses proposed in the present disclosure. However, the disclosure applies to wireless communication systems generally and is not limited to LTE or LTE-A systems.

IAB nodes, at least conceptually, feature a base station part, or Distributed Unit (DU), and a Mobile Termination (MT) part. The MT part can currently request Uplink (UL) resources for UL data transmission only after the node actually receives, from its child node(s), the data to be transmitted, even though the node already has knowledge of the incoming data. In a multi-hop network, any such delays may accumulate due to the number of hops and the amount of data aggregated at each IAB node. This is illustrated in fig. 1, which shows a worst-case scenario where neither IAB node A nor node B has any UL resources currently assigned to it. In this disclosure, a network node may be referred to simply as a node.
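
For illustration only, the following sketch (in Python, with hypothetical timing values that are not taken from any specification) shows how this per-hop dependency causes delay to accumulate linearly with the number of hops.

```python
# A minimal sketch (not part of the disclosure) of how scheduling delay can
# accumulate per hop when every IAB node may request UL resources only after
# the data itself has arrived. All durations are arbitrary example values (ms).

def sequential_hop_delay(num_hops: int,
                         sr_to_grant_ms: float = 4.0,
                         bsr_to_grant_ms: float = 4.0,
                         data_tx_ms: float = 2.0) -> float:
    """Total delivery time when each hop runs SR -> grant -> BSR -> grant -> data
    strictly after the previous hop has delivered the data."""
    return num_hops * (sr_to_grant_ms + bsr_to_grant_ms + data_tx_ms)

if __name__ == "__main__":
    for hops in (1, 2, 4):
        print(f"{hops} hop(s): {sequential_hop_delay(hops):.1f} ms end-to-end")
```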

Figures 1 through 5, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.

Fig. 1 illustrates a message exchange process involving three nodes in a multi-hop IAB network arrangement in a wireless communication system, according to various embodiments of the present disclosure.

In fig. 1, a typical telecommunications network using several IAB nodes in a multi-hop configuration is shown. Data is transferred from node A (the first node) to node B (the second node) to node C (the third node), and so on. Various embodiments of the present disclosure recognize that when node A (the child) needs to send data to node B (the parent), it sends a message (1-10) that includes a Scheduling Request (SR). Node B responds with a UL grant message (1-20) allocating some capacity to node A. Node A then sends a Buffer Status Report (BSR) message (1-30), which indicates to node B the amount of data node A needs to transmit. Node B responds with a further UL grant message (1-40) allocating appropriate capacity to node A to send node A's data (in message 1-50).

If node A already has capacity available to signal to node B that a large amount of data needs to be sent, steps 1-10 and 1-20 may be omitted and node A may be able to send its BSR message (1-30) directly.

Once node B receives the data at its DU, its MT sends a new scheduling request message (1-60) to node C. Node C responds with a UL grant message (1-70), and the same pattern of steps already set forth is repeated between node B and node C (i.e., the same as 1-30 to 1-50).

Importantly, in this arrangement, node B cannot request resources from node C until it has received data (1-50). This dependency is indicated by the dashed arrow 1-1.
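
As a purely illustrative sketch of the fig. 1 ordering (message labels follow fig. 1; the durations and helper function are hypothetical), the dependency 1-1 can be modelled as a prerequisite chain in which SR 1-60 cannot start before data 1-50 has been received:

```python
# Illustrative only: the fig. 1 message order, where each message may start once
# its prerequisite has completed. Durations (in ms) are arbitrary example values.

FIG1_STEPS = [
    # (message, prerequisite, duration_ms)
    ("1-10 SR A->B",        None,                   1.0),
    ("1-20 UL grant B->A",  "1-10 SR A->B",         4.0),
    ("1-30 BSR A->B",       "1-20 UL grant B->A",   1.0),
    ("1-40 UL grant B->A",  "1-30 BSR A->B",        4.0),
    ("1-50 data A->B",      "1-40 UL grant B->A",   2.0),
    ("1-60 SR B->C",        "1-50 data A->B",       1.0),  # dependency 1-1
    ("1-70 UL grant C->B",  "1-60 SR B->C",         4.0),
]

def finish_times(steps):
    """Completion time of each message, given its single prerequisite."""
    done = {}
    for name, prereq, duration in steps:
        done[name] = done.get(prereq, 0.0) + duration
    return done

if __name__ == "__main__":
    for message, t in finish_times(FIG1_STEPS).items():
        print(f"{message:20s} completes at t = {t:4.1f} ms")
```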

Various embodiments of the present disclosure provide solutions to problems such as those set forth above.

Fig. 2 illustrates at least one embodiment of the present disclosure by means of an exchange of messages involving node A, node B, and node C.

In one embodiment, node A (the first node) sends an SR message (2-10) and node B (the second node) responds with a UL grant message (2-20). Node A then sends a BSR message (2-30). Upon receiving the BSR message (2-30), node B knows the nature and quantity of the data destined for onward transmission to node C (the third node), and therefore sends an SR message (2-100) to node C. Node C responds with a UL grant message (2-110). Meanwhile, node B sends a further UL grant message (2-40) to node A and, in response, node A sends its data (2-50) to node B.

Once node B receives the UL grant message (2-110) from node C, node B sends a new BSR message (2-120) to node C based on the information in the BSR message (2-30) it has received from node A. Node C responds with a UL grant (2-130) and node B then sends the data to node C (2-140). Note that in some cases node C may not act on a request from node B, e.g., where node C supplies resources on its own initiative, or where node C is unable to do so for capacity or other operational reasons. However, where node B requests resources, node C will take the request into account and decide whether to provide the requested resources.

In this case, the dashed arrow 2-2 represents the link between node B learning from node A the details of the data to be forwarded to node C and node B subsequently taking action.

Dashed arrow 2-4 indicates data flowing from node B to node C. Note that this occurs earlier than in fig. 1, where the data from node A must be received at node B before any request is made to forward it to node C. In this embodiment, node B is able to anticipate the need to request capacity, as shown, so some activities occur in parallel.
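
The earlier forwarding opportunity can be illustrated with a sketch analogous to the one given for fig. 1; again, the durations are hypothetical and the single-prerequisite model is a simplification. The key difference is that SR 2-100 depends on BSR 2-30 (link 2-2) rather than on data 2-50, so the exchange with node C overlaps with the exchange with node A, and data 2-140 can start as soon as both the data 2-50 and the grant 2-130 are available.

```python
# Illustrative only: the fig. 2 ordering. SR 2-100 now depends on BSR 2-30 (link
# 2-2) instead of on the data 2-50, so the two legs run in parallel. Durations
# (in ms) are arbitrary example values; finish_times() is as in the fig. 1 sketch.

FIG2_STEPS = [
    ("2-10 SR A->B",        None,                   1.0),
    ("2-20 UL grant B->A",  "2-10 SR A->B",         4.0),
    ("2-30 BSR A->B",       "2-20 UL grant B->A",   1.0),
    ("2-40 UL grant B->A",  "2-30 BSR A->B",        4.0),
    ("2-50 data A->B",      "2-40 UL grant B->A",   2.0),
    ("2-100 SR B->C",       "2-30 BSR A->B",        1.0),  # link 2-2: no wait for the data
    ("2-110 UL grant C->B", "2-100 SR B->C",        4.0),
    ("2-120 BSR B->C",      "2-110 UL grant C->B",  1.0),
    ("2-130 UL grant C->B", "2-120 BSR B->C",       4.0),
]

def finish_times(steps):
    done = {}
    for name, prereq, duration in steps:
        done[name] = done.get(prereq, 0.0) + duration
    return done

if __name__ == "__main__":
    t = finish_times(FIG2_STEPS)
    # Data 2-140 needs both the received data (2-50) and the grant from node C (2-130).
    ready = max(t["2-50 data A->B"], t["2-130 UL grant C->B"])
    print(f"data 2-140 B->C can start at t = {ready:.1f} ms")
```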

By using the embodiments set forth above, latency may be reduced and an overall increase in system performance experienced. Performance may be further improved by other triggering mechanisms that anticipate the need to request capacity.

However, when node B learns that data is coming from node A, it may not be desirable for node B to request UL resources from node C in every case. Doing so may be wasteful in terms of the limited resources available. Thus, in embodiments of the present disclosure, requesting resources from node C via the SR (2-100) or the new BSR (2-120) is not automatic, but is triggered when one or more of the following conditions are met (a decision sketch follows the list):

- the BSR (2-30) from node A indicates the presence in the child node (node A) of data of a particular type/belonging to a particular service;

- the BSR (2-30) from node A indicates the presence of high priority data;

- the BSR (2-30) from node A indicates the presence of data of a certain priority relative to the priority of the existing data in the node's own buffer;

- the BSR (2-30) from node A indicates that the total buffer occupancy at the child node exceeds a certain threshold (which may be configurable);

- the BSR (2-30) from node A indicates that the amount of data of a certain type/belonging to a certain service/having a high priority exceeds a certain threshold;

- the BSR (2-30) from node A indicates that there is data that will require a number of hops exceeding some (configurable) threshold to reach its destination;

- the BSR (2-30) from node A indicates the presence of data whose transmission would require a type of resources (e.g. different types of carriers/bandwidth parts/numerologies) that has not yet been configured.

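The following sketch illustrates how such a trigger decision might be organised. It is illustrative only: the ChildBsr record, its field names, and every threshold are assumptions introduced here for readability, not fields or values defined by the disclosure or by 3GPP.

```python
from dataclasses import dataclass, field

@dataclass
class ChildBsr:
    # Hypothetical summary of what node B might extract from the BSR (2-30).
    total_bytes: int
    highest_priority: int                      # lower value = higher priority (assumption)
    bytes_per_service: dict = field(default_factory=dict)
    hops_to_destination: int = 1
    needs_unconfigured_resource: bool = False

def should_request_early(bsr: ChildBsr,
                         own_buffer_priority: int,
                         services_of_interest=("example-service",),
                         priority_threshold: int = 2,
                         occupancy_threshold: int = 10_000,
                         service_bytes_threshold: int = 1_000,
                         hop_threshold: int = 2) -> bool:
    """Return True if any of the example trigger conditions above is met."""
    if any(s in bsr.bytes_per_service for s in services_of_interest):
        return True                            # data of a particular type/service present
    if bsr.highest_priority <= priority_threshold:
        return True                            # high priority data present
    if bsr.highest_priority < own_buffer_priority:
        return True                            # higher priority than node B's own buffered data
    if bsr.total_bytes > occupancy_threshold:
        return True                            # total buffer occupancy above threshold
    if any(v > service_bytes_threshold for v in bsr.bytes_per_service.values()):
        return True                            # amount for a given type/service above threshold
    if bsr.hops_to_destination > hop_threshold:
        return True                            # many hops still required to reach the destination
    return bsr.needs_unconfigured_resource     # needs a not-yet-configured resource class
```
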
In a further embodiment, a new BSR (2-120) from node B to node C is triggered only if the time at which node B determines the grant for its child node, and/or the time until the resources referenced in that grant become available, exceeds a certain threshold.

In a further embodiment, when node B determines that it may not be able to give its child node resources within a reasonable (configurable) time window, a new BSR (2-120) from node B to node C is not triggered.
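
As a compact illustration of the two timing-based rules above (the function name, parameters, and thresholds are hypothetical):

```python
def trigger_new_bsr_by_timing(time_to_child_grant_ms: float,
                              time_to_granted_resources_ms: float,
                              timing_threshold_ms: float = 5.0,
                              reasonable_window_ms: float = 20.0) -> bool:
    """Hypothetical combination of the two timing rules: trigger the new BSR
    (2-120) only when the child's grant or its resources are far enough away to
    make an early request worthwhile, but not when node B cannot serve the child
    at all within a reasonable (configurable) window."""
    if time_to_child_grant_ms > reasonable_window_ms:
        return False      # child data will not arrive soon; do not request early
    return (time_to_child_grant_ms > timing_threshold_ms or
            time_to_granted_resources_ms > timing_threshold_ms)
```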

The embodiments described so far relate to a new BSR triggered based on the BSR (2-30) received from the child node (node A). Embodiments of the present disclosure also relate to a new BSR triggered based on the SR (2-10) received from the child node. In this way, an even earlier request can be made. This is indicated by the dashed arrow 2-3 in fig. 2. Many of the examples cited above are equally applicable here.

However, as previously mentioned, when node B learns that data is coming from node A, it may not be desirable for node B to request UL resources from node C in every case. Doing so may be wasteful in terms of the limited resources available. Thus, in another embodiment, the SR (2-100) or the new BSR (2-120) is triggered only when an SR (2-10) is received from the child node (node A) and one or more of the following conditions are met (a decision sketch follows the explanation of SR configurations below):

- the SR configuration (used for transmission of the SR) corresponds to a high priority logical channel;

- the SR configuration corresponds to a logical channel of a certain priority relative to the priority of the existing data in the node's own buffer;

- the SR configuration corresponds to a logical channel dedicated to a specific service, or carrying high priority data exceeding a certain threshold;

- the SR configuration corresponds to a logical channel with data that will require a number of hops exceeding a certain (configurable) threshold to reach its destination;

- the SR configuration corresponds to a logical channel with data that would require a class of resources (e.g., different types of carriers/bandwidth parts/numerologies) that has not yet been configured.

In this case, the SR configuration refers to a set of resources allowing a node/terminal to transmit an SR, and more particularly a set of PUCCH resources. In NR there are multiple such sets (called configurations), and the one used indicates certain properties of the channel that triggered the SR.
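
A sketch of the SR-configuration-based trigger follows. The table contents, identifiers, and thresholds are invented for illustration; in practice the mapping from an SR/PUCCH configuration to logical-channel properties would come from the node's own configuration.

```python
# Hypothetical mapping kept by node B from each of the child's SR configurations
# (i.e. PUCCH resource sets) to properties of the logical channel(s) that use it.
# All identifiers, labels, and thresholds below are invented for illustration.

SR_CONFIG_TABLE = {
    0: {"priority": 1, "dedicated_service": True,  "hops": 3, "needs_unconfigured": False},
    1: {"priority": 7, "dedicated_service": False, "hops": 1, "needs_unconfigured": False},
}

def early_request_on_sr(sr_config_id: int,
                        own_buffer_priority: int = 5,
                        priority_threshold: int = 2,
                        hop_threshold: int = 2) -> bool:
    """Decide, from the SR configuration on which SR 2-10 arrived, whether to send
    SR 2-100 / BSR 2-120 towards node C before the child's data is received."""
    props = SR_CONFIG_TABLE.get(sr_config_id)
    if props is None:
        return False
    return (props["priority"] <= priority_threshold        # high priority logical channel
            or props["priority"] < own_buffer_priority     # priority relative to own buffer
            or props["dedicated_service"]                  # channel dedicated to a specific service
            or props["hops"] > hop_threshold               # many hops to the destination
            or props["needs_unconfigured"])                # needs a not-yet-configured resource class
```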

In an embodiment, a new BSR (2-120) is generated and transmitted when UL resources are assigned to node B and the number of padding bits is equal to or greater than the size of the BSR (2-120) plus its subheader. When checking against the padding size, the existing BSR (2-30) may take precedence over the new BSR (2-120). Alternatively, the new BSR (2-120) may take precedence over the existing BSR (2-30). This decision may be made based on the existing BSR (2-30) that was sent most recently. It will also depend on whether the new BSR (2-120) includes the existing BSR (see below).
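
The padding check can be pictured as follows; sizes are in bytes and the precedence flag corresponds to the configurable choice described above. This is a sketch under assumed sizes, not the MAC-specified procedure.

```python
def bsr_for_padding(padding_bytes: int,
                    existing_bsr_bytes: int,
                    new_bsr_bytes: int,
                    subheader_bytes: int = 1,
                    prefer_new: bool = False):
    """Hypothetical padding check: a BSR is included only if the padding can hold
    the BSR plus its subheader; prefer_new selects which BSR is checked first."""
    if prefer_new:
        candidates = [("new", new_bsr_bytes), ("existing", existing_bsr_bytes)]
    else:
        candidates = [("existing", existing_bsr_bytes), ("new", new_bsr_bytes)]
    for name, size in candidates:
        if padding_bytes >= size + subheader_bytes:
            return name
    return None  # neither BSR fits in the available padding
```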

Embodiments of the present disclosure additionally relate to the format of the new BSR (2-120) and its content.

For any of the above embodiments, the BSR (2-120) in question may report only the total amount of data in the buffer of the child node (node A). In a further embodiment, the BSR (2-120) reports all (or part) of the buffer status of the child node (node A) translated into the status of its own (node B) buffer. In other words, when data is to be received as reported in the BSR (for example a padding BSR sent by node A), node B will calculate the expected change in occupancy of its own logical channel groups (LCGs), or a subset thereof.

In a further embodiment, the calculation takes into account any existing grants that node C has given to node B, and deducts any reduction in buffer status occupancy that is expected based on available UL resources. Furthermore, the new BSR (2-120) uses a different format (e.g., indicated by flag/reserved bits/LCID), explicitly indicating that this is the "expected data BSR". In case the new BSR indicates cumulative occupancy (current data + expected data), this may be sent as two separate BSRs.
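
The following sketch illustrates one possible way to compute such an "expected data BSR": the child's reported bytes are mapped onto node B's own logical channel groups (a mapping assumed to have been done already), optionally added to the current occupancy (type-A versus type-B, as listed below), and reduced by any UL resources already granted by node C. The greedy deduction order is an illustrative assumption.

```python
def expected_data_bsr(current_lcg_bytes: dict,
                      child_bytes_per_lcg: dict,
                      granted_ul_bytes: int,
                      include_current: bool = True) -> dict:
    """Per-LCG byte counts for a new BSR that either combines current and expected
    data or reports expected data only, after deducting already-granted resources."""
    report, remaining_grant = {}, granted_ul_bytes
    for lcg in sorted(set(current_lcg_bytes) | set(child_bytes_per_lcg)):
        total = child_bytes_per_lcg.get(lcg, 0)
        if include_current:
            total += current_lcg_bytes.get(lcg, 0)
        served = min(remaining_grant, total)   # deduct the existing grant from node C
        remaining_grant -= served
        report[lcg] = total - served
    return report

if __name__ == "__main__":
    # Example: 500 bytes already buffered in LCG 1, 1200/300 bytes expected from the
    # child for LCGs 1 and 2, and 600 bytes already granted by node C.
    print(expected_data_bsr({1: 500}, {1: 1200, 2: 300}, granted_ul_bytes=600))
```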

In an embodiment, in accordance with the foregoing, three different types of buffer status are defined (an illustrative encoding is sketched below):

1. current data only (existing BSR)

2. Current data + expected data (A-type New BSR)

3. Only expected data (B-type New BSR)

Embodiments of the present disclosure use a hybrid operation of option 1 and at least one of option 2 and option 3.
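
Purely as an illustrative encoding of these three buffer status types (the LCID values below are placeholders, not codepoints from any MAC specification):

```python
from enum import Enum

class BufferStatusType(Enum):
    CURRENT_ONLY = "existing BSR"              # option 1
    CURRENT_PLUS_EXPECTED = "A-type new BSR"   # option 2
    EXPECTED_ONLY = "B-type new BSR"           # option 3

# Placeholder LCID values chosen for illustration only; real codepoints would be
# assigned by the MAC specification.
HYPOTHETICAL_LCID = {
    BufferStatusType.CURRENT_ONLY: 45,
    BufferStatusType.CURRENT_PLUS_EXPECTED: 46,
    BufferStatusType.EXPECTED_ONLY: 47,
}

def build_bsr(bsr_type: BufferStatusType, per_lcg_bytes: dict) -> dict:
    """Assemble a toy, MAC-CE-like record that distinguishes the BSR type by LCID."""
    return {"lcid": HYPOTHETICAL_LCID[bsr_type],
            "type": bsr_type.value,
            "buffer_status": dict(per_lcg_bytes)}
```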

In another embodiment, the new BSR (2-120) includes additional data on top of the buffer occupancy data, including one or more of the following (an illustrative structure is sketched after the list):

- the time of reception of the child node's BSR (2-30);

- the time at which the data (2-50) from the child node is expected to be received;

- the time of reception of the child node's SR (2-10);

- the time at which node B expects to give a UL grant (2-40) to node A.

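One possible, purely illustrative structure carrying these optional fields alongside the occupancy report (all field names are assumptions):

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ExpectedDataBsr:
    """Hypothetical content of the new BSR (2-120): per-LCG occupancy plus the
    optional timing information listed above. All field names are illustrative."""
    per_lcg_bytes: Dict[int, int]
    child_bsr_rx_time_ms: Optional[float] = None          # reception time of BSR 2-30
    expected_child_data_time_ms: Optional[float] = None   # expected arrival of data 2-50
    child_sr_rx_time_ms: Optional[float] = None           # reception time of SR 2-10
    expected_child_grant_time_ms: Optional[float] = None  # when grant 2-40 is expected to be given
```
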
In another embodiment, node B may be configured (for example by node C) to report a new BSR (2-120) with a certain periodicity, and/or to report a new BSR (2-120) when polled. Node B may also refrain from reporting a new BSR (2-120) for a certain period of time.

The embodiments described so far have focused on a node B having only one parent node (node C). Embodiments also relate to the case where node B has multiple parent nodes. This includes, but is not limited to, the case of dual connectivity. In this case, embodiments of the present disclosure also relate to the following (a distribution sketch follows the list):

- the new BSR (2-120) is sent to only one of the parent nodes (e.g. the primary node; or the node from which most grants are expected based on past history; or the node from which most grants are expected based on the known destination addresses of past packets from the child node; or the node from which most grants are expected based on the known IDs of the DRBs configured for the child node; or the node towards which node B is biased for obtaining grants);

- the new BSR (2-120) is sent as an identical copy to a subset of the parent nodes or to all parent nodes;

- the new BSR (2-120) is sent to a subset of the parent nodes or to all parent nodes, but the reported expected data occupancy is divided across the multiple reports according to some configurable threshold.

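A distribution sketch follows; the mode names, the parent-selection rule, and the splitting threshold are illustrative assumptions rather than defined behaviour.

```python
def distribute_new_bsr(expected_bytes: int,
                       parents: list,
                       mode: str = "single",
                       split_threshold: int = 5_000) -> dict:
    """Hypothetical distribution of the new BSR (2-120) to one or more parent nodes.

    mode == "single":    everything goes to one parent (e.g. the primary node, or the
                         parent expected to give most grants based on past history).
    mode == "duplicate": an identical report is sent to every listed parent.
    mode == "split":     the expected occupancy is divided across the parents,
                         capping all but the last report at a configurable threshold.
    """
    if not parents:
        return {}
    if mode == "single":
        return {parents[0]: expected_bytes}
    if mode == "duplicate":
        return {p: expected_bytes for p in parents}
    if mode == "split":
        reports, remaining = {}, expected_bytes
        for i, parent in enumerate(parents):
            share = remaining if i == len(parents) - 1 else min(split_threshold, remaining)
            reports[parent] = share
            remaining -= share
        return reports
    raise ValueError(f"unknown mode: {mode}")
```
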
As can be seen from the foregoing, embodiments of the present disclosure permit early provision of resources in a network comprising multiple IAB nodes, thereby reducing latency. Furthermore, by using the secondary triggers defined in the foregoing, the use of the limited resources available remains efficient, since additional capacity in the network is not requested unless certain criteria are met.

At least some of the example embodiments described herein may be constructed, in part or in whole, using dedicated special purpose hardware. Terms such as "component," "module," or "unit" as used herein may include, but are not limited to, a hardware device, such as a circuit in the form of a discrete or integrated component, a Field Programmable Gate Array (FPGA), or an Application Specific Integrated Circuit (ASIC), that performs certain tasks or provides associated functions. In some embodiments, the elements may be configured to reside on a tangible, persistent, addressable storage medium and may be configured to execute on one or more processors. These functional elements may include, for example, components (such as software components, object-oriented software components, class components, and task components), processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Although example embodiments have been described with reference to components, modules, and units discussed herein, such functional elements may be combined into fewer elements or separated into additional elements. Various combinations of optional features have been described herein, and it will be understood that the features may be combined in any suitable combination. In particular, features of any one example embodiment may be combined with features of any other embodiment as appropriate, unless such combinations are mutually exclusive. Throughout this specification, the term "comprising" or "comprises" is intended to mean that the specified elements are included, but not to preclude the presence of other elements.

Fig. 3 illustrates a method performed by a second network node according to an embodiment of the present disclosure.

Referring to fig. 3, in operation 310, the second network node may receive at least one of a first Scheduling Request (SR) and a first Buffer Status Report (BSR) from the first network node.

In operation 320, the second network node may transmit at least one of a second SR and a second BSR to the third network node based on the first SR or the first BSR before receiving data to be transmitted corresponding to the first SR or the first BSR from the first network node.

In operation 330, the second network node may receive a first Uplink (UL) grant corresponding to the second SR or the second BSR from a third network node.

In operation 340, the second network node may receive the data from the first network node.

In operation 350, the second network node may transmit the data to the third network node.

According to the method described in fig. 3, latency may be reduced by requesting resources for the data from the third network node before the second network node receives the data from the first network node.

Fig. 4 is a block diagram illustrating a UE according to an embodiment of the present disclosure.

Fig. 4 schematically shows a User Equipment (UE) according to an embodiment of the present disclosure.

The UE described above may correspond to the UE of fig. 4.

Referring to fig. 4, the UE may include a processor 4-05, a transceiver 4-10, and a memory 4-15. However, not all of the illustrated components are required. The UE may be implemented with more or fewer components than those shown in fig. 4. Furthermore, according to another embodiment, the processor 4-05, the transceiver 4-10, and the memory 4-15 may be implemented as a single chip.

The aforementioned components will now be described in detail.

The processors 4-05 may include one or more processors or other processing devices that control the proposed functions, processes, and/or methods. The operation of the UE may be implemented by the processor 4-05.

The processor 4-05 may detect PDCCH on the configured set of control resources. The processor 4-05 determines a method for dividing CBs and a method for PDSCH rate matching from the PDCCH. The processor 4-05 may control the transceiver 4-10 to receive the PDSCH according to the PDCCH. The processor 4-05 may generate HARQ-ACK information from the PDSCH. The processor 4-05 may control the transceiver 4-10 to send HARQ-ACK information.

The transceiver 4-10 may include an RF transmitter for up-converting and amplifying transmitted signals, and an RF receiver for down-converting the frequency of received signals. However, according to another embodiment, the transceiver 4-10 may be implemented with more or fewer components than those described.

The transceiver 4-10 may be connected to the processor 4-05 and transmit and/or receive signals. The signals may include control information and data. In addition, the transceiver 4-10 may receive signals over a wireless channel and output the signals to the processor 4-05. The transceiver 4-10 may transmit the signal output from the processor 4-05 through a wireless channel.

The memory 4-15 may store control information or data included in signals obtained by the UE. The memory 4-15 may be connected to the processor 4-05 and store at least one instruction, protocol, or parameter for the proposed functions, procedures, and/or methods. The memory 4-15 may comprise read-only memory (ROM) and/or random-access memory (RAM) and/or a hard disk and/or a CD-ROM and/or a DVD and/or other storage devices.

Fig. 5 is a block diagram illustrating a network node according to an embodiment of the present disclosure.

The network entities, e.g., nodes, network nodes, base stations, enbs, gnbs, network functions, and any other network entities described above may correspond to the network nodes of fig. 5.

Referring to fig. 5, a network node may include a processor 5-05, a transceiver 5-10, and a memory 5-15. However, not all of the illustrated components are required. The network node may be implemented with more or fewer components than those shown in fig. 5. Furthermore, according to another embodiment, the processor 5-05, the transceiver 5-10, and the memory 5-15 may be implemented as a single chip.

The aforementioned components will now be described in detail.

The processors 5-05 may include one or more processors or other processing devices that control the proposed functions, processes, and/or methods. The operation of the network node may be implemented by the processor 5-05.

The transceiver 5-10 may include an RF transmitter for up-converting and amplifying transmitted signals, and an RF receiver for down-converting the frequency of received signals. However, according to another embodiment, the transceiver 5-10 may be implemented with more or fewer components than those described.

The transceiver 5-10 may be connected to the processor 5-05 and transmit and/or receive signals. The signals may include control information and data. Further, the transceiver 5-10 may receive a signal through a wireless channel and output the signal to the processor 5-05. The transceiver 5-10 may transmit a signal output from the processor 5-05 through a wireless channel.

The memory 5-15 may store control information or data included in signals obtained by the network node. The memory 5-15 may be connected to the processor 5-05 and store at least one instruction, protocol, or parameter for the proposed functions, procedures, and/or methods. The memory 5-15 may comprise read-only memory (ROM) and/or random-access memory (RAM) and/or a hard disk and/or a CD-ROM and/or a DVD and/or other storage devices.

Attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.

All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.

Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.

The disclosure is not limited to the details of the foregoing embodiments. The disclosure extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Those skilled in the art will appreciate that all or some of the steps of the above-described method embodiments may be implemented by program instructions executing on associated hardware. The program may be stored on a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.

Furthermore, the functional units in the various embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer-readable storage medium.

While the present disclosure has been described with various embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.

None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claim scope. The scope of patented subject matter is defined only by the claims. Furthermore, none of the claims is intended to invoke 35 U.S.C. § 112(f) unless the exact words "means for" are followed by a participle.

