Electronic device and method for data aggregation in 5G user equipment

Document No.: 1909881  Publication date: 2021-11-30

Note: This technology, "Electronic device and method for data aggregation in 5G user equipment," was created by Q. Li, M. F. Starsinic, Hongkun Li, P. M. Edjakple, C. M. Mladin, and J. M. Murray on 2020-04-28. Abstract: An electronic device comprising circuitry configured to send a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices; receive one or more aggregation policies from the core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data; receive data from other electronic devices; aggregate data from the other electronic devices based on the parameters; and send the aggregated data to the core network.

1. An electronic device, comprising:

circuitry configured to:

send a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices;

receive one or more aggregation policies from the core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data;

receive data from the other electronic devices;

aggregate the data from the other electronic devices based on the parameters; and

send the aggregated data to the core network.

2. The electronic device of claim 1, wherein the circuitry of the electronic device is configured to establish a shared Protocol Data Unit (PDU) session enabling the electronic device to send the aggregated data to the core network; and

receive a response from the core network indicating that the shared PDU session has been created.

3. The electronic device of claim 1, wherein the one or more aggregation policies comprise at least a traffic group ID parameter and a parameter indicating a delay tolerant data period.

4. The electronic device of claim 3, wherein the data received from one of the other electronic devices includes a traffic group ID, and the electronic device aggregates the received data if the traffic group ID matches the traffic group ID parameter of the one or more aggregation policies.

5. The electronic device of claim 3, wherein the data received from one of the other electronic devices includes a traffic group ID and a PDU session ID.

6. The electronic device of claim 3, wherein the circuitry of the electronic device is configured to set an amount of time to wait for data from the other electronic devices based on the delay tolerant data period.

7. The electronic device of claim 6, wherein a non-access stratum (NAS) layer of the electronic device sets the amount of time to wait for data from the other electronic devices.

8. The electronic device of claim 2, wherein the circuitry of the electronic device is configured to establish the shared PDU session with a primary PDU session ID and one or more secondary PDU session IDs.

9. The electronic device of claim 1, wherein the circuitry of the electronic device is configured to inform a Radio Access Network (RAN) node that delay tolerant data will be sent to the RAN node, such that the RAN node can optimize allocation of radio resources.

10. The electronic device of claim 9, wherein the circuitry of the electronic device is configured to receive a page from the RAN node informing the electronic device that the RAN node is ready to receive aggregated data.

11. The electronic device of claim 1, wherein the circuitry of the electronic device is configured to relay the aggregated data to the core network through a Radio Access Network (RAN) node and a relay aggregation user plane function (RA UPF).

12. The electronic device of claim 11, wherein the RA UPF receives the aggregated data, separates the aggregated data, and forwards respective portions of the separated data to different PDU session anchor (PSA) UPFs according to the respective PDU session IDs provided with the data from each of the other electronic devices.

13. The electronic device of claim 1, wherein the circuitry of the electronic device is configured to coordinate with other electronic devices to synchronize delay budgets for transmitting data to optimize battery consumption of the electronic device and the other electronic devices.

14. The electronic device according to claim 1, wherein the electronic device is a user equipment functioning as a relay node device.

15. The electronic device of claim 1, wherein the electronic device is configured to receive data from other electronic devices via PC5 communication.

16. The electronic device of claim 9, wherein the electronic device is configured to receive a broadcast from the RAN node indicating that the RAN node supports receiving delay tolerant data and will page the electronic device when it is ready to receive the delay tolerant data.

17. The electronic device of claim 1, wherein the request includes a relay aggregation indication or a data aggregation indication.

18. A method performed by an electronic device, the method comprising:

sending a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices;

receiving one or more aggregation policies from a core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data;

receiving data from other electronic devices;

aggregating data from other electronic devices based on the parameters; and

sending the aggregated data to the core network.

19. A method for a network node, the method comprising:

receiving a request from an electronic device, wherein the request informs the network node that the electronic device is capable of aggregating and relaying data from other electronic devices; and

sending one or more aggregation policies to the electronic device, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data,

wherein the one or more aggregation policies comprise at least a traffic group ID parameter and a parameter indicating a delay tolerant data period.

20. The method as recited in claim 19, further comprising:

receiving aggregated data, wherein the received aggregated data is data that has been aggregated from other electronic devices based on the provided parameters.

Technical Field

The present disclosure relates generally to wireless communications, and more particularly to wireless communication systems, apparatuses, methods, and computer-readable media having computer-executable instructions for data aggregation in 5G User Equipment (UE).

Background

The "background" description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

A UE may be used to track assets such as shipping containers. Such tracking use cases require the UE to operate worldwide and be reachable throughout transit to provide tracking and management of the asset. These UEs can be attached to containers and have an expected service life of 10-15 years. Because the UE may be a device with a fixed battery, efficient power consumption is a critical requirement. Due to mobility and differences in signal strength, the container (and the UE attached to it) may not always have connectivity to the core network, during which time the container can no longer be tracked. In addition, because a conventional tracking UE remains powered on for long periods while transmitting or receiving data, or while waiting to transmit or receive data, it consumes a large amount of power.

Disclosure of Invention

Example embodiments of the present disclosure provide an electronic device. The electronic device comprises circuitry configured to: send a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices; receive one or more aggregation policies from the core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data; receive data from the other electronic devices; aggregate the data from the other electronic devices based on the parameters; and send the aggregated data to the core network.
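The behavior summarized above lends itself to a simple model. The following Python sketch is purely illustrative and is not part of the disclosure: the class names, field names, and data layout are all hypothetical. It shows a relay UE buffering only data whose traffic group ID matches a provisioned aggregation policy, and bundling the buffered data together with each source's PDU session ID so the network can later separate it:

```python
from dataclasses import dataclass, field

@dataclass
class AggregationPolicy:
    """Parameters the core network could provide (names are illustrative)."""
    traffic_group_id: str
    delay_tolerant_period_s: int  # how long the relay may wait before sending

@dataclass
class RelayUE:
    policy: AggregationPolicy
    buffer: list = field(default_factory=list)

    def receive(self, traffic_group_id: str, pdu_session_id: str, payload: bytes) -> bool:
        # Aggregate only data whose traffic group ID matches the policy.
        if traffic_group_id != self.policy.traffic_group_id:
            return False
        self.buffer.append({"pdu_session_id": pdu_session_id, "payload": payload})
        return True

    def flush(self) -> dict:
        # Bundle the buffered data into one aggregated unit, keeping each
        # source's PDU session ID so the network can separate the data later.
        aggregated = {
            "traffic_group_id": self.policy.traffic_group_id,
            "units": self.buffer,
        }
        self.buffer = []
        return aggregated

relay = RelayUE(AggregationPolicy(traffic_group_id="tg-1", delay_tolerant_period_s=3600))
relay.receive("tg-1", "pdu-A", b"container 17 position")
relay.receive("tg-2", "pdu-B", b"dropped: wrong traffic group")
relay.receive("tg-1", "pdu-C", b"container 42 position")
bundle = relay.flush()
```

In an actual UE the flush would be triggered when the delay tolerant data period expires; the `delay_tolerant_period_s` field is carried here only to show where that parameter would live.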

Example embodiments of the present disclosure provide a method performed by an electronic device. The method comprises the following steps: sending a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices; receiving one or more aggregation policies from a core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data; receiving data from other electronic devices; aggregating data from other electronic devices based on the parameters; and sending the aggregated data to the core network.

An exemplary embodiment of the present disclosure provides a method for a network node, the method comprising: receiving a request from an electronic device, wherein the request informs the network node that the electronic device is capable of aggregating and relaying data from other electronic devices; and transmitting one or more aggregation policies to the electronic device, wherein the one or more aggregation policies provide parameters instructing the electronic device how to aggregate and relay data, wherein the one or more aggregation policies include at least a traffic group ID parameter and a parameter indicating a delay tolerant data period.
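The network-side counterpart of this method involves a relay aggregation user plane function that separates the aggregated data and forwards each portion according to its PDU session ID. The following Python sketch is an illustration only; the dictionary layout and all names are assumptions, not the disclosed implementation:

```python
from collections import defaultdict

def disaggregate(bundle: dict) -> dict:
    """Group the units of an aggregated bundle by PDU session ID, as a
    relay aggregation UPF might do before forwarding each group to the
    anchor UPF serving that PDU session."""
    per_session = defaultdict(list)
    for unit in bundle["units"]:
        per_session[unit["pdu_session_id"]].append(unit["payload"])
    return dict(per_session)

# Hypothetical bundle produced by a relay UE: two source devices share the relay.
bundle = {
    "traffic_group_id": "tg-1",
    "units": [
        {"pdu_session_id": "pdu-A", "payload": b"m1"},
        {"pdu_session_id": "pdu-B", "payload": b"m2"},
        {"pdu_session_id": "pdu-A", "payload": b"m3"},
    ],
}
per_session = disaggregate(bundle)
```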

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to addressing any or all of the disadvantages noted in any part of this disclosure.

Drawings

The scope of the present disclosure may be better understood from the following detailed description of exemplary embodiments when read in conjunction with the accompanying drawings, in which:

Fig. 1A is a system diagram representing an exemplary 3GPP architecture;

Fig. 1B is a system diagram representing an example of a Radio Access Network (RAN) architecture and a core network architecture;

Fig. 1C is a system diagram representing an example of a Radio Access Network (RAN) architecture and a core network architecture;

Fig. 1D is a system diagram representing an example of a Radio Access Network (RAN) architecture and a core network architecture;

Fig. 1E is a system diagram representing an exemplary 3GPP architecture;

Fig. 1F is a system diagram of an example device or apparatus configured for wireless communication;

Fig. 1G is a system diagram representing an example of a computing system for use in a communication network;

Fig. 2 shows a non-roaming 5G system architecture in accordance with an illustrative embodiment;

Fig. 3A shows a UE-requested PDU session establishment procedure in accordance with an illustrative embodiment;

Fig. 3B shows a UE-requested PDU session establishment procedure in accordance with an illustrative embodiment;

Fig. 4 shows a user plane architecture of an uplink classifier in accordance with an illustrative embodiment;

Fig. 5 shows a multi-homed PDU session in accordance with an illustrative embodiment;

Fig. 6A shows a change of PDU session anchor with an IPv6 multi-homed PDU session in accordance with an illustrative embodiment;

Fig. 6B shows a change of PDU session anchor with an IPv6 multi-homed PDU session in accordance with an illustrative embodiment;

Fig. 6C shows a change of PDU session anchor with an IPv6 multi-homed PDU session in accordance with an illustrative embodiment;

Fig. 7 shows the addition of an additional PDU session anchor and branching point in accordance with an illustrative embodiment;

Fig. 8 shows the removal of an additional PDU session anchor and branching point in accordance with an illustrative embodiment;

Fig. 9 shows a diagram of an IAB architecture in accordance with an illustrative embodiment;

Fig. 10 shows an architecture in accordance with an illustrative embodiment;

Fig. 11A shows an example protocol stack of the adaptation layer;

Fig. 11B shows an example protocol stack of the adaptation layer;

Fig. 11C shows an example protocol stack of the adaptation layer;

Fig. 11D shows an example protocol stack of the adaptation layer;

Fig. 11E shows an example protocol stack of the adaptation layer;

Fig. 12 shows a container tracking use case in accordance with an illustrative embodiment;

Fig. 13 shows a user plane protocol stack in accordance with an illustrative embodiment;

Fig. 14A shows a UE registration procedure in accordance with an illustrative embodiment;

Fig. 14B shows a UE registration procedure in accordance with an illustrative embodiment;

Fig. 14C shows a UE registration procedure in accordance with an illustrative embodiment;

Fig. 15 shows a UE-triggered V2X policy provisioning procedure in accordance with an illustrative embodiment;

Fig. 16 shows synchronization of delay tolerant data periods between UEs in a relay link in accordance with an illustrative embodiment;

Fig. 17 shows an uplink UE relay data aggregation operation in accordance with an illustrative embodiment;

Fig. 18 shows a downlink UPF relay data aggregation operation in accordance with an illustrative embodiment;

Fig. 19 shows a RAN node paging a UE for delay tolerant data in accordance with an illustrative embodiment; and

Fig. 20 shows a user interface displaying relay aggregation options in a UE in accordance with an illustrative embodiment.

Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description of the exemplary embodiments is for illustration only and is not intended to limit the scope of the disclosure.

Detailed Description

The third generation partnership project (3GPP) has developed technical standards for cellular telecommunications network technology including radio access, core transport network, and service capabilities-including work on codecs, security, and quality of service. Recent Radio Access Technology (RAT) standards include WCDMA (commonly referred to as 3G), LTE (commonly referred to as 4G), LTE-Advanced standards, and New Radio (NR) also referred to as "5G". The development of the 3GPP NR standard is expected to continue and include the definition of the next generation radio access technology (new RAT), and the provision of new flexible radio access below 7GHz, and new ultra mobile broadband radio access above 7 GHz. Flexible radio access is expected to consist of new non-backward compatible radio access in a new spectrum below 7GHz and is expected to include different modes of operation that can be multiplexed together in the same spectrum to address a wide set of 3GPP NR use cases with different requirements. It is expected that ultra mobile broadband includes cmWave and mmWave spectrum, which will provide opportunities for ultra mobile broadband access for e.g. indoor applications and hotspots. In particular, it is expected that ultra mobile broadband will share a common design framework with flexible radio access below 7GHz, with design optimizations specific to cmWave and mmWave.

3GPP has identified a variety of use cases that NR is expected to support, resulting in a wide range of user experience requirements for data rate, latency, and mobility. The use cases include the following general categories: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), massive machine-type communications (mMTC), network operations (e.g., network slicing, routing, migration and interworking, energy conservation), and enhanced vehicle-to-everything (eV2X) communications, which may include any of vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-network (V2N), vehicle-to-pedestrian (V2P), and vehicle-to-other-entity communications. Specific services and applications in these categories include, for example, surveillance and sensor networks, device remote control, two-way remote control, personal cloud computing, video streaming, wireless cloud office, first responder connectivity, automotive emergency calls, disaster alerts, real-time gaming, multi-player video calls, autonomous driving, augmented reality, tactile internet, virtual reality, home automation, robotics, and aerial drones, to name a few. All of these use cases, as well as others, are contemplated herein.

The following is a list of acronyms related to service levels and core network technologies that may appear in the following description. Unless otherwise indicated, acronyms used herein refer to corresponding terms listed below.

Definitions and abbreviations

Acronyms

Terms and definitions

Exemplary communication System and network

Fig. 1A illustrates an exemplary communication system 100 in which the systems, methods, and devices described and claimed herein may be used. The communication system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, 102e, 102f, and/or 102g, which may be referred to generically or collectively as a WTRU 102. The communication system 100 may include a Radio Access Network (RAN) 103/104/105/103b/104b/105b, a core network 106/107/109, a Public Switched Telephone Network (PSTN) 108, the internet 110, other networks 112, and network services 113. The network services 113 may include, for example, V2X servers, V2X functions, ProSe servers, ProSe functions, IoT services, video streaming, and/or edge computing, among others.

It is to be appreciated that the concepts disclosed herein may be utilized with any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102 may be any type of device or apparatus configured to operate and/or communicate in a wireless environment. In the example of fig. 1A, each of the WTRUs 102 is depicted in fig. 1A-1E as a handheld wireless communication device. It should be appreciated that with various use cases contemplated for wireless communication, each WTRU may include or be included in any type of device or apparatus configured to send and/or receive wireless signals, including by way of example only, a User Equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a Personal Digital Assistant (PDA), a smartphone, a laptop, a tablet, a netbook, a notebook computer, a personal computer, a wireless sensor, consumer electronics, a wearable device (such as a smart watch or smart garment), a medical or electronic health device, a robot, industrial equipment, a drone, a vehicle such as an automobile, bus or truck, a train, or an airplane, and the like.

Communication system 100 may also include base station 114a and base station 114b. In the example of Fig. 1A, each base station 114a and 114b is depicted as a single element. In practice, the base stations 114a and 114b may include any number of interconnected base stations and/or network elements. The base station 114a may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c to facilitate access to one or more communication networks, such as the core network 106/107/109, the internet 110, network services 113, and/or the other networks 112. Similarly, the base station 114b may be any type of device configured to interface with at least one of Remote Radio Heads (RRHs) 118a, 118b, Transmission and Reception Points (TRPs) 119a, 119b, and/or roadside units (RSUs) 120a and 120b, wired and/or wirelessly, to facilitate access to one or more communication networks, such as the core network 106/107/109, the internet 110, other networks 112, and/or network services 113. The RRHs 118a, 118b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102, e.g., WTRU 102c, to facilitate access to one or more communication networks, such as the core network 106/107/109, the internet 110, network services 113, and/or other networks 112.

The TRPs 119a, 119b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102d to facilitate access to one or more communication networks, such as the core network 106/107/109, the internet 110, network services 113, and/or the other networks 112. The RSUs 120a and 120b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102e or 102f to facilitate access to one or more communication networks, such as the core network 106/107/109, the internet 110, the other networks 112, and/or the network services 113. For example, the base stations 114a, 114B may be Base Transceiver Stations (BTSs), Node-Bs, eNode Bs, home Node Bs, home eNode Bs, next generation Node-Bs (gNode Bs), satellites, site controllers, Access Points (APs), wireless routers, and the like.

The base station 114a may be part of the RAN 103/104/105, and the RAN 103/104/105 may also include other base stations and/or network elements (not shown), such as Base Station Controllers (BSCs), Radio Network Controllers (RNCs), relay nodes, and so forth. Similarly, base station 114b may be part of RAN 103b/104b/105b, and RAN 103b/104b/105b may also include other base stations and/or network elements (not shown), such as BSCs, RNCs, relay nodes, and so forth. The base station 114a may be configured to transmit and/or receive wireless signals within a particular geographic area, which may be referred to as a cell (not shown). Similarly, base station 114b may be configured to transmit and/or receive wired and/or wireless signals within a particular geographic area, which may be referred to as a cell (not shown). The cell may be further divided into cell sectors. For example, the cell associated with base station 114a may be divided into three sectors. Thus, for example, the base station 114a may include three transceivers, e.g., one transceiver per sector of a cell. The base station 114a may employ multiple-input multiple-output (MIMO) techniques, and thus, for example, multiple transceivers may be used for each sector of a cell.

The base station 114a may communicate with one or more of the WTRUs 102a, 102b, 102c, and 102g over an air interface 115/116/117, and the air interface 115/116/117 may be any suitable wireless communication link (e.g., Radio Frequency (RF), microwave, Infrared (IR), Ultraviolet (UV), visible, cmWave, mmWave, etc.). The air interface 115/116/117 may be established using any suitable Radio Access Technology (RAT).

Base station 114b may communicate with one or more of the RRHs 118a and 118b, TRPs 119a and 119b, and/or RSUs 120a and 120b over wired or air interfaces 115b/116b/117b, which may be any suitable wired (e.g., cable, fiber, etc.) or wireless communication links (e.g., RF, microwave, IR, UV, visible light, cmWave, mmWave, etc.). The air interfaces 115b/116b/117b may be established using any suitable RAT.

The RRHs 118a, 118b, TRPs 119a, 119b and/or RSUs 120a, 120b may communicate with one or more of the WTRUs 102c, 102d, 102e, 102f over the air interfaces 115c/116c/117c, which air interfaces 115c/116c/117c may be any suitable wireless communication links (e.g., RF, microwave, IR, ultraviolet UV, visible, cmWave, mmWave, etc.).

Air interfaces 115c/116c/117c may be established using any suitable RAT.

The WTRUs 102 may communicate with one another via direct air interfaces 115d/116d/117d, such as sidelink communications, which may be any suitable wireless communication link (e.g., RF, microwave, IR, UV, visible light, cmWave, mmWave, etc.).

Air interfaces 115d/116d/117d may be established using any suitable RAT.

Communication system 100 may be a multiple-access system and may employ one or more channel access schemes such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 103/104/105 and the WTRUs 102a, 102b, 102c, or the RRHs 118a, 118b, TRPs 119a, 119b in the RAN 103b/104b/105b and/or the RSUs 120a and 120b and the WTRUs 102c, 102d, 102e, and 102f may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interfaces 115/116/117 and/or 115c/116c/117c, respectively, using Wideband CDMA (WCDMA). WCDMA may include communication protocols such as High Speed Packet Access (HSPA) and/or evolved HSPA (HSPA+). HSPA may include High Speed Downlink Packet Access (HSDPA) and/or High Speed Uplink Packet Access (HSUPA).

The base station 114a in the RAN 103/104/105 and the WTRUs 102a, 102b, 102c, and 102g, or the RRHs 118a and 118b, TRPs 119a and 119b, and/or RSUs 120a and 120b in the RAN 103b/104b/105b and the WTRUs 102c, 102d may implement a radio technology such as evolved UMTS terrestrial radio access (E-UTRA), which may establish the air interface 115/116/117 or 115c/116c/117c, respectively, using, for example, Long Term Evolution (LTE) and/or LTE-Advanced (LTE-a). Air interfaces 115/116/117 or 115c/116c/117c may implement 3GPP NR technology. LTE and LTE-a technologies may include LTE D2D and/or V2X technologies and interfaces (such as sidelink communications, etc.). Similarly, 3GPP NR technology may include NR V2X technology and interfaces (such as sidelink communications, etc.).

The base station 114a in the RAN 103/104/105 and the WTRUs 102a, 102b, 102c, and 102g, or the RRHs 118a and 118b, TRPs 119a and 119b, and/or RSUs 120a and 120b in the RAN 103b/104b/105b and the WTRUs 102c, 102d, 102e, and 102f may implement radio technologies such as IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and so forth.

The base station 114c in fig. 1A may be, for example, a wireless router, a home node B, a home eNode B, or an access point, and may utilize any suitable RAT to facilitate wireless connectivity in a local area, such as a commercial venue, home, vehicle, train, airplane, satellite, factory, campus, etc. The base station 114c and the WTRU 102, e.g., WTRU 102e, may implement a radio technology such as IEEE 802.11 to establish a Wireless Local Area Network (WLAN). Similarly, the base station 114c and the WTRU 102, e.g., WTRU 102d, may implement a radio technology such as IEEE 802.15 to establish a Wireless Personal Area Network (WPAN). The base station 114c and the WTRU 102, e.g., WTRU 102e, may establish a pico cell or a femto cell using a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE-A, NR, etc.). As shown in fig. 1A, base station 114c may have a direct connection to the internet 110. Thus, the base station 114c may not be required to access the internet 110 via the core network 106/107/109.

The RAN 103/104/105 and/or the RANs 103b/104b/105b may be in communication with a core network 106/107/109, and the core network 106/107/109 may be any type of network configured to provide voice, data, messaging, authorization and authentication, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102. For example, the core network 106/107/109 may provide call control, billing services, mobile location-based services, prepaid calls, internet connectivity, packet data network connectivity, ethernet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.

Although not shown in fig. 1A, it is to be appreciated that the RAN 103/104/105 and/or the RANs 103b/104b/105b and/or the core network 106/107/109 may communicate directly or indirectly with other RANs that employ the same RAT as the RAN 103/104/105 and/or the RANs 103b/104b/105b or different RATs. For example, in addition to connecting to RAN 103/104/105 and/or RAN 103b/104b/105b, which may utilize E-UTRA radio technology, core network 106/107/109 may also communicate with another RAN (not shown) that employs GSM or NR radio technology.

The core network 106/107/109 may also serve as a gateway for the WTRU 102 to access the PSTN 108, the internet 110, and/or other networks 112. The PSTN 108 may include a circuit-switched telephone network that provides Plain Old Telephone Service (POTS). The internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP) in the TCP/IP internet protocol suite. Other networks 112 may include wired or wireless communication networks owned and/or operated by other service providers. For example, the network 112 may include any type of packet data network (e.g., IEEE 802.3 ethernet) or another core network connected to one or more RANs, which may employ the same RAT as the RAN 103/104/105 and/or the RANs 103b/104b/105b or a different RAT.

Some or all of the WTRUs 102a, 102b, 102c, 102d, 102e, and 102f in the communication system 100 may include multi-mode capabilities, e.g., the WTRUs 102a, 102b, 102c, 102d, 102e, and 102f may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 102g shown in fig. 1A may be configured to communicate with a base station 114a, which may employ a cellular-based radio technology, and with a base station 114c, which may employ an IEEE 802 radio technology.

Although not shown in fig. 1A, it is to be appreciated that the user device may establish a wired connection to the gateway. The gateway may be a Residential Gateway (RG). The RG may provide connectivity to a core network 106/107/109. It is to be appreciated that many of the concepts contained herein may be equally applied to UEs that are WTRUs and UEs that use wired connections to connect to a network. For example, the concepts applied to wireless interfaces 115, 116, 117 and 115c/116c/117c may be equally applied to wired connections.

Fig. 1B is a system diagram illustrating RAN 103 and core network 106. As described above, the RAN 103 may communicate with the WTRUs 102a, 102b, and 102c over the air interface 115 using UTRA radio technology. RAN 103 may also communicate with core network 106. As shown in fig. 1B, the RAN 103 may include Node-bs 140a, 140B, and 140c, and the Node-bs 140a, 140B, and 140c may each include one or more transceivers for communicating with the WTRUs 102a, 102B, and 102c over the air interface 115. The Node-bs 140a, 140B, and 140c may each be associated with a particular cell (not shown) within the RAN 103. The RAN 103 may also include RNCs 142a, 142 b. It is to be appreciated that the RAN 103 can include any number of Node-Bs and Radio Network Controllers (RNCs).

As shown in fig. 1B, the Node-Bs 140a and 140b may communicate with the RNC 142a. In addition, Node-B 140c may communicate with RNC 142b. Node-Bs 140a, 140b, and 140c may communicate with the respective RNCs 142a and 142b via an Iub interface. RNCs 142a and 142b may communicate with each other via an Iur interface. Each of the RNCs 142a and 142b may be configured to control the respective Node-Bs 140a, 140b, and 140c to which it is connected. In addition, each of the RNCs 142a and 142b may be configured to perform or support other functions such as outer loop power control, load control, admission control, packet scheduling, handover control, macro diversity, security functions, data encryption, and the like.

The core network 106 shown in fig. 1B may include a Media Gateway (MGW) 144, a Mobile Switching Center (MSC) 146, a Serving GPRS Support Node (SGSN) 148, and/or a Gateway GPRS Support Node (GGSN) 150. While each of the above elements is depicted as part of the core network 106, it is to be appreciated that any of these elements may be owned and/or operated by an entity other than the core network operator.

RNC 142a in RAN 103 may be connected to MSC 146 in core network 106 via an IuCS interface. MSC 146 may be connected to MGW 144. The MSC 146 and the MGW 144 may provide the WTRUs 102a, 102b, and 102c with access to a circuit-switched network, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, and 102c and conventional landline communication devices.

The RNC 142a in the RAN 103 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface. The SGSN 148 may be coupled to a GGSN 150. The SGSN 148 and GGSN 150 may provide the WTRUs 102a, 102b, and 102c with access to a packet-switched network, such as the internet 110, to facilitate communications between the WTRUs 102a, 102b, and 102c and IP-enabled devices.

The core network 106 may also be connected to other networks 112, and the other networks 112 may include other wired or wireless networks owned and/or operated by other service providers.

Fig. 1C is a system diagram illustrating RAN 104 and core network 107. As described above, the RAN 104 may communicate with the WTRUs 102a, 102b, and 102c over the air interface 116 using E-UTRA radio technology. RAN 104 may also communicate with a core network 107.

RAN 104 may include eNode-Bs 160a, 160b, and 160c, although it is to be appreciated that RAN 104 may include any number of eNode-Bs. The eNode-Bs 160a, 160b, and 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, and 102c over the air interface 116. For example, eNode-Bs 160a, 160b, and 160c may implement MIMO technology. Thus, for example, eNode-B 160a may use multiple antennas to transmit wireless signals to the WTRU 102a and receive wireless signals from the WTRU 102a.

Each of eNode-Bs 160a, 160b, and 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in fig. 1C, eNode-Bs 160a, 160b, and 160c can communicate with each other over an X2 interface.

The core network 107 shown in fig. 1C may include a Mobility Management Entity (MME) 162, a serving gateway 164, and a Packet Data Network (PDN) gateway 166. While each of the above elements is depicted as part of the core network 107, it is to be appreciated that any of these elements may be owned and/or operated by an entity other than the core network operator.

MME 162 may be connected to each of the eNode-Bs 160a, 160b, and 160c in RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, and 102c, bearer activation/deactivation, selecting a particular serving gateway during initial attachment of the WTRUs 102a, 102b, and 102c, and the like. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.

Serving gateway 164 may be connected to each of the eNode-Bs 160a, 160b, and 160c in RAN 104 via an S1 interface. The serving gateway 164 may generally route and forward user data packets to and from the WTRUs 102a, 102b, and 102c. The serving gateway 164 may also perform other functions such as anchoring the user plane during inter-eNode-B handovers, triggering paging when downlink data is available to the WTRUs 102a, 102b, and 102c, managing and storing the contexts of the WTRUs 102a, 102b, and 102c, and the like.

The serving gateway 164 may also be connected to a PDN gateway 166, which the PDN gateway 166 may provide the WTRUs 102a, 102b, and 102c with access to a packet-switched network, such as the internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.

The core network 107 may facilitate communication with other networks. For example, the core network 107 may provide the WTRUs 102a, 102b, and 102c with access to a circuit-switched network, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, and 102c and conventional landline communication devices. For example, the core network 107 may include or may communicate with an IP gateway (e.g., an IP Multimedia Subsystem (IMS) server) that serves as an interface between the core network 107 and the PSTN 108. In addition, the core network 107 may provide the WTRUs 102a, 102b, and 102c with access to the network 112, which network 112 may include other wired or wireless networks owned and/or operated by other service providers.

Fig. 1D is a system diagram illustrating RAN 105 and core network 109. RAN 105 may communicate with WTRUs 102a and 102b over air interface 117 using NR radio technology. RAN 105 may also communicate with core network 109. A non-3GPP interworking function (N3IWF) 199 may communicate with the WTRU 102c over the air interface 198 using a non-3GPP radio technology. The N3IWF 199 may also communicate with the core network 109.

RAN 105 may include gNode-Bs 180a and 180b. It is to be appreciated that RAN 105 may include any number of gNode-Bs. The gNode-Bs 180a and 180b may each include one or more transceivers for communicating with the WTRUs 102a and 102b over the air interface 117. When integrated access and backhaul connections are used, the same air interface may be used between the WTRUs and gNode-Bs, which may be connected to the core network 109 via one or more gNBs. The gNode-Bs 180a and 180b may implement MIMO, MU-MIMO, and/or digital beamforming techniques. Thus, the gNode-B 180a may, for example, use multiple antennas to transmit wireless signals to the WTRU 102a and receive wireless signals from the WTRU 102a. It should be appreciated that RAN 105 may employ other types of base stations, such as eNode-Bs. It is also to be appreciated that RAN 105 may employ more than one type of base station. For example, the RAN may employ eNode-Bs and gNode-Bs.

The N3IWF 199 may include a non-3GPP access point 180c. It is to be appreciated that the N3IWF 199 may include any number of non-3GPP access points. Non-3GPP access point 180c may include one or more transceivers for communicating with WTRU 102c over air interface 198. Non-3GPP access point 180c may communicate with WTRU 102c over air interface 198 using 802.11 protocols.

The gNode-Bs 180a and 180b may each be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink or downlink, and the like. As shown in fig. 1D, the gNode-Bs 180a and 180b may communicate with each other over an Xn interface, for example.

The core network 109 shown in fig. 1D may be a 5G core network (5 GC). The core network 109 may provide numerous communication services to clients interconnected by radio access networks. The core network 109 includes a plurality of entities that perform the functions of the core network. The term "core network entity" or "network function" as used herein refers to any entity that performs one or more functions of the core network. It should be understood that such core network entities may be logical entities implemented in the form of computer-executable instructions (software) stored in a memory of a device configured for wireless and/or network communication, or a computer system, such as the system illustrated in fig. 1G, and executed on a processor thereof.

In the example of fig. 1D, the 5G core network 109 may include an Access and Mobility Management Function (AMF) 172, a Session Management Function (SMF) 174, User Plane Functions (UPFs) 176a and 176b, a Unified Data Management function (UDM) 197, an Authentication Server Function (AUSF) 190, a Network Exposure Function (NEF) 196, a Policy Control Function (PCF) 184, a non-3GPP interworking function (N3IWF) 199, and a User Data Repository (UDR) 178. While each of the above elements is depicted as part of the 5G core network 109, it is to be appreciated that any of these elements may be owned and/or operated by an entity other than the core network operator. It is also to be appreciated that a 5G core network may not consist of all of these elements, may consist of additional elements, and may include multiple instances of each of these elements. Fig. 1D shows the network functions directly connected to each other, but it should be appreciated that they may communicate via a routing agent, such as a Diameter routing agent, or a message bus.

In the example of fig. 1D, connectivity between network functions is achieved via a set of interfaces, or reference points. It is to be appreciated that a network function may be modeled, described, or implemented as a set of services that are invoked or called by other network functions or services. Invocation of network function services may be accomplished via a direct connection between network functions, the exchange of messages over a message bus, invoking a software function, and the like.
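As an illustration of the reference-point model described above, the following sketch records a few of the interfaces from fig. 1D as a simple lookup table. This is a toy model for illustration only: it covers a subset of the interfaces, and the table representation itself is not a 3GPP-defined structure.

```python
# Toy model of a few 5G core reference points from fig. 1D.
# The lookup-table representation is illustrative, not 3GPP-defined.
REFERENCE_POINTS = {
    ("AMF", "SMF"): "N11",
    ("SMF", "PCF"): "N7",
    ("SMF", "UPF"): "N4",
    ("RAN", "AMF"): "N2",
    ("PCF", "AF"): "N5",
    ("AMF", "UDM"): "N8",
    ("AUSF", "UDM"): "N13",
}

def reference_point(a: str, b: str) -> str:
    """Look up the interface between two network functions, ignoring direction."""
    return REFERENCE_POINTS.get((a, b)) or REFERENCE_POINTS.get((b, a), "unknown")

print(reference_point("SMF", "AMF"))  # N11
```

Because reference points are bidirectional, the lookup tries both orderings of the pair before falling back to "unknown".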

The AMF 172 may be connected to the RAN 105 via an N2 interface and may serve as a control node. For example, the AMF 172 may be responsible for registration management, connection management, reachability management, access authentication, access authorization. The AMF may be responsible for forwarding user plane tunnel configuration information to the RAN 105 via the N2 interface. The AMF 172 may receive user plane tunnel configuration information from the SMF via an N11 interface. The AMF 172 may generally route and forward NAS packets to and from the WTRUs 102a, 102b, and 102c via an N1 interface. The N1 interface is not shown in fig. 1D.

The SMF 174 may be connected to the AMF 172 via an N11 interface. Similarly, the SMF 174 may be connected to PCF 184 via an N7 interface and to UPFs 176a and 176b via an N4 interface. SMF 174 may serve as a control node. For example, the SMF 174 may be responsible for session management, IP address assignment for the WTRUs 102a, 102b, and 102c, management and configuration of traffic steering rules in the UPF 176a and the UPF 176b, and generation of downlink data notifications to the AMF 172.

The UPFs 176a and 176b may provide the WTRUs 102a, 102b, and 102c with access to a Packet Data Network (PDN), such as the internet 110, to facilitate communications between the WTRUs 102a, 102b, and 102c and other devices. UPFs 176a and 176b may also provide WTRUs 102a, 102b, and 102c with access to other types of packet data networks. For example, the other network 112 may be an ethernet network or any type of network that switches data packets. UPFs 176a and 176b may receive traffic steering rules from SMF 174 via an N4 interface. The UPFs 176a and 176b may provide access to the packet data network by interfacing with the packet data network using an N6 interface, or by interfacing with each other or other UPFs via an N9 interface. In addition to providing access to the packet data network, the UPF 176 may also be responsible for packet routing and forwarding, policy rule enforcement, quality of service processing for user plane traffic, downlink packet buffering.

The AMF 172 may also be connected to the N3IWF 199, for example, via an N2 interface. The N3IWF facilitates connectivity between, for example, the WTRU 102c and the 5G core network 109 via radio interface technologies not defined by 3GPP. The AMF may interact with the N3IWF 199 in the same or a similar manner as it interacts with the RAN 105.

PCF 184 may be connected to SMF 174 via an N7 interface, to AMF 172 via an N15 interface, and to an Application Function (AF) 188 via an N5 interface. The N15 and N5 interfaces are not shown in fig. 1D. PCF 184 may provide policy rules to control plane nodes, such as AMF 172 and SMF 174, allowing the control plane nodes to enforce the rules. PCF 184 may send policies to AMF 172 for the WTRUs 102a, 102b, and 102c so that the AMF may deliver the policies to the WTRUs 102a, 102b, and 102c via an N1 interface. The policies may then be enforced or applied at the WTRUs 102a, 102b, and 102c.
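The rule-enforcement pattern described above can be sketched as follows. The rule structure and field names are assumptions for illustration only; real PCC rules carried between the PCF and enforcing nodes are considerably richer.

```python
# Hedged sketch of a control-plane node enforcing policy rules
# received from the PCF. The PolicyRule structure is illustrative,
# not a 3GPP encoding of PCC rules.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    match: str   # traffic descriptor the rule applies to
    action: str  # action the enforcing node takes

def enforce(rules: list, traffic: str) -> str:
    """Apply the first rule matching the traffic; default to 'allow'."""
    for rule in rules:
        if rule.match == traffic:
            return rule.action
    return "allow"

rules = [PolicyRule(match="video", action="rate-limit")]
print(enforce(rules, "video"))  # rate-limit
print(enforce(rules, "voice"))  # allow
```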

UDR 178 may serve as a repository for authentication credentials and subscription information. The UDR may be connected to a network function so that the network function can add data to, read data from, and modify data in the repository. For example, UDR 178 may connect to PCF 184 via an N36 interface. Similarly, UDR 178 may be connected to NEF 196 via an N37 interface, and UDR 178 may be connected to UDM 197 via an N35 interface.

UDM 197 may serve as an interface between UDR 178 and other network functions. UDM 197 may authorize network functions to access UDR 178. For example, UDM 197 may be connected to AMF 172 via an N8 interface, and UDM 197 may be connected to SMF 174 via an N10 interface. Similarly, UDM 197 may be connected to AUSF 190 via an N13 interface. UDR 178 and UDM 197 may be tightly integrated together.

The AUSF 190 performs authentication-related operations, connects to the UDM 197 via an N13 interface, and connects to the AMF 172 via an N12 interface.

NEF 196 exposes capabilities and services of the 5G core network 109 to an Application Function (AF) 188. The exposure may occur over the N33 API interface. The NEF may connect to the AF 188 via an N33 interface, and it may connect to other network functions in order to expose the capabilities and services of the 5G core network 109.

The application function 188 may interact with network functions in the 5G core network 109. Interaction between the application function 188 and the network function may occur via a direct interface or may occur via the NEF 196. The application function 188 may be considered part of the 5G core network 109 or may be external to the 5G core network 109 and deployed by an enterprise having a business relationship with a mobile network operator.

Network slicing is a mechanism that a mobile network operator can use to support one or more 'virtual' core networks behind the operator's air interface. This involves 'slicing' the core network into one or more virtual networks to support different RANs, or different service types running across a single RAN. Network slicing enables the operator to create customized networks to provide optimized solutions for different market scenarios that demand diverse requirements, e.g., in terms of functionality, performance, and isolation.

The 3GPP has designed the 5G core network to support network slicing. Network slicing is a useful tool that network operators can use to support the diverse set of 5G use cases (e.g., massive IoT, critical communications, V2X, and enhanced mobile broadband) that have very diverse and sometimes extreme requirements. Without network slicing, when each use case has its own specific set of performance, scalability, and availability requirements, the network architecture would likely not be flexible and scalable enough to efficiently support a wider range of use case needs. Network slicing also makes the introduction of new network services more efficient.

Referring again to fig. 1D, in a network slice scenario, the WTRU 102a, 102b or 102c may connect to the AMF 172 via an N1 interface. The AMF 172 may logically be part of one or more slices. The AMF 172 may coordinate the connectivity or communication of the WTRUs 102a, 102b, or 102c with one or more UPFs 176a and 176b, SMFs 174, and other network functions. Each of UPFs 176a and 176b, SMF 174, and other network functions may be part of the same slice or different slices. When they are part of different slices, they may be isolated from each other in the sense that they may utilize different computing resources, security credentials, etc.
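A minimal sketch of the slice arrangement described above, with an AMF shared across slices and per-slice SMF/UPF instances. The S-NSSAI values, instance names, and data layout are assumptions for illustration only.

```python
# Illustrative model of network slicing: one AMF serves the WTRU,
# while SMF/UPF instances may belong to different slices. The slice
# identifiers (S-NSSAI values) here are made up for illustration.
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    s_nssai: str  # slice identifier (SST-SD)
    smf: str      # SMF instance serving this slice
    upf: str      # UPF instance serving this slice

SLICES = {
    "eMBB": NetworkSlice("01-000001", smf="SMF-1", upf="UPF-176a"),
    "mIoT": NetworkSlice("03-000002", smf="SMF-2", upf="UPF-176b"),
}

def select_slice(service_type: str) -> NetworkSlice:
    """Return the slice serving the requested service type (the AMF is common to both)."""
    return SLICES[service_type]

print(select_slice("mIoT").upf)  # UPF-176b
```

When the selected slices differ, their SMF/UPF instances may be isolated from each other, e.g., running on different computing resources with different security credentials, as noted above.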

The core network 109 may facilitate communication with other networks. For example, the core network 109 may include, or may communicate with, an IP gateway, such as an IP Multimedia Subsystem (IMS) server, that serves as an interface between the 5G core network 109 and the PSTN 108. For example, the core network 109 may include, or communicate with, a Short Message Service (SMS) service center that facilitates communication via the short message service. For example, the 5G core network 109 may facilitate the exchange of non-IP data packets between the WTRUs 102a, 102b, and 102c and the server or application function 188. In addition, the core network 109 may provide the WTRUs 102a, 102b, and 102c with access to the network 112, which network 112 may include other wired or wireless networks owned or operated by other service providers.

The core network entities described herein and illustrated in fig. 1A, 1C, 1D and 1E are identified using names assigned to these entities in certain existing 3GPP specifications, although it is understood that in the future, these entities and functions may be identified by other names, and that in future specifications published by 3GPP, including future 3GPP NR specifications, certain entities or functions may be combined. Thus, the particular network entities and functions described and illustrated in fig. 1A, 1B, 1C, 1D, and 1E are provided as examples only, and it is to be understood that the subject matter disclosed and claimed herein may be implemented or realized in any similar communication system, whether presently defined or defined in the future.

Fig. 1E illustrates an exemplary communication system 111 in which the systems, methods, and devices described herein may be used. The communication system 111 may include wireless transmit/receive units (WTRUs) A, B, C, D, E, and F, a base station gNB 121, a V2X server 124, and roadside units (RSUs) 123a and 123b. In practice, the concepts presented herein may be applied to any number of WTRUs, gNB base stations, V2X networks, and/or other network elements. One, several, or all of the WTRUs A, B, C, D, E, and F may be outside the range of the access network coverage area 122. WTRUs A, B, and C form a V2X group, where WTRU A is the group leader and WTRUs B and C are the group members.

WTRUs A, B, C, D, E, and F may communicate with each other over Uu interface 129b via the gNB 121 if they are within access network coverage (in fig. 1E, only B and F are shown within network coverage). WTRUs A, B, C, D, E, and F may communicate directly with each other via sidelink (PC5 or NR PC5) interfaces 125a, 125b, and 128, whether they are within or outside access network coverage (e.g., in fig. 1E, A, C, D, and E are shown outside network coverage, yet they may still communicate with the other WTRUs).

WTRUs a, B, C, D, E, and F may communicate with RSUs 123a or 123b via a vehicle-to-network (V2N)126 or sidelink interface 125 b. WTRUs a, B, C, D, E, and F may communicate with a V2X server 124 via a vehicle-to-infrastructure (V2I) interface 127. WTRUs a, B, C, D, E, and F may communicate with another UE via a vehicle-to-pedestrian (V2P) interface 128.

Fig. 1F is a block diagram of an example device or apparatus, such as the WTRU 102 of fig. 1A, 1B, 1C, 1D, or 1E, that may be configured for wireless communication and operation in accordance with the systems, methods, and apparatus described herein. As shown in fig. 1F, an exemplary WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad/indicator 128, non-removable memory 130, removable memory 132, a power supply 134, a Global Positioning System (GPS) chipset 136, and other peripherals 138. It is to be appreciated that the WTRU 102 may include any subcombination of the above elements. In addition, the base stations 114a and 114b, and/or the nodes that the base stations 114a and 114b may represent, such as, but not limited to, Base Transceiver Stations (BTSs), Node-Bs, site controllers, Access Points (APs), home Node-Bs, evolved home Node-Bs (eNodeBs), home evolved Node-Bs (HeNBs), home evolved Node-B gateways, next generation Node-Bs (gNode-Bs), proxy nodes, and the like, may include some or all of the elements depicted in fig. 1F and described herein.

The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of Integrated Circuit (IC), a state machine, or the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functions that enable the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to a transceiver 120, and the transceiver 120 may be coupled to a transmit/receive element 122. Although fig. 1F depicts the processor 118 and the transceiver 120 as separate components, it is to be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.

The transmit/receive element 122 of the UE may be configured to transmit signals to or receive signals from a base station (e.g., base station 114a of fig. 1A) over air interface 115/116/117, or to transmit signals to or receive signals from another UE over air interface 115d/116d/117 d. For example, transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. The transmit/receive element 122 may be, for example, an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals. The transmit/receive element 122 may be configured to transmit and receive both RF signals and optical signals. It is to be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals or wired signals.

Additionally, although transmit/receive element 122 is depicted as a single element in fig. 1F, WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 115/116/117.

Transceiver 120 may be configured to modulate signals to be transmitted by transmit/receive element 122 and demodulate signals received by transmit/receive element 122. As described above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11 or NR and E-UTRA, or to communicate using the same RAT via multiple beams to different RRHs, TRPs, RSUs, or nodes.

The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, a speaker/microphone 124, a keypad 126, and/or a display/touchpad/indicator 128, such as a Liquid Crystal Display (LCD) display unit or an Organic Light Emitting Diode (OLED) display unit. The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad/indicator 128. In addition, processor 118 may access information from, and store data in, any type of suitable memory, such as non-removable memory 130 and/or removable memory 132. The non-removable memory 130 may include Random Access Memory (RAM), Read Only Memory (ROM), a hard disk, or any other type of storage device. The removable memory 132 may include a Subscriber Identity Module (SIM) card, a memory stick, a Secure Digital (SD) memory card, and the like. The processor 118 may access information from, and store data in, memory that is not physically located in the WTRU 102, such as memory hosted on a server in the cloud or in an edge computing platform, or in a home computer (not shown).

The processor 118 may obtain power from the power source 134 and may be configured to distribute power to other components in the WTRU 102 and/or control power to other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries, solar cells, fuel cells, or the like.

The processor 118 may also be coupled to a GPS chipset 136, which the GPS chipset 136 may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to or instead of information from the GPS chipset 136, the WTRU 102 may receive location information from base stations (e.g., base stations 114a, 114b) over the air interface 115/116/117 and/or determine its location based on the timing of signals received from two or more nearby base stations. It is to be appreciated that the WTRU 102 may acquire location information by any suitable location determination method.

The processor 118 may also be coupled to other peripherals 138, which peripherals 138 may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, peripheral devices 138 may include various sensors such as accelerometers, biometric (e.g., fingerprint) sensors, electronic compasses, satellite transceivers, digital cameras (for photos or video), Universal Serial Bus (USB) ports or other interconnection interfaces, vibration devices, television transceivers, hands-free headsets, portable devices, Bluetooth® modules, Frequency Modulation (FM) radio units, digital music players, media players, video game player modules, internet browsers, and the like.

The WTRU 102 may be included in other devices or apparatuses such as sensors, consumer electronics, wearable apparatuses such as smart watches or smart clothing, medical or electronic health apparatuses, robots, industrial equipment, drones, vehicles such as cars, trucks, trains, or airplanes. The WTRU 102 may connect to other components, modules, or systems of such a device or apparatus via one or more interconnect interfaces, such as an interconnect interface that may include one of the peripherals 138.

Fig. 1G is a block diagram of an exemplary computing system 90 in which one or more devices of the communication networks illustrated in figs. 1A, 1C, 1D, and 1E may be embodied, such as certain nodes or functional entities in the RAN 103/104/105, core network 106/107/109, PSTN 108, internet 110, other networks 112, or network services 113. The computing system 90 may comprise a computer or server and may be controlled primarily by computer readable instructions, which may be in the form of software, wherever or by whatever means such software is stored or accessed. Such computer readable instructions may be executed within processor 91 to cause computing system 90 to operate. The processor 91 may be a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of Integrated Circuit (IC), a state machine, or the like. The processor 91 may perform signal coding, data processing, power control, input/output processing, and/or any other functions that enable the computing system 90 to operate in a communication network. Coprocessor 81 is an optional processor, distinct from the main processor 91, that may perform additional functions or assist processor 91. Processor 91 and/or coprocessor 81 may receive, generate, and process data associated with the methods and apparatus disclosed herein.

In operation, processor 91 fetches, decodes, and executes instructions and transfers information to and from other resources via the computing system's main data transfer path, system bus 80. Such a system bus connects the components in computing system 90 and defines the medium for data exchange. The system bus 80 typically includes data lines for sending data, address lines for sending addresses, and control lines for sending interrupts and for operating the system bus. An example of such a system bus 80 is a PCI (peripheral component interconnect) bus.

The memory coupled to system bus 80 includes Random Access Memory (RAM) 82 and Read Only Memory (ROM) 93. Such memories include circuitry that allows information to be stored and retrieved. The ROM 93 typically contains stored data that cannot easily be modified. The data stored in the RAM 82 may be read or changed by the processor 91 or other hardware devices. Access to the RAM 82 and/or ROM 93 may be controlled by a memory controller 92. The memory controller 92 may provide address translation functionality that translates virtual addresses to physical addresses as instructions are executed. The memory controller 92 may also provide memory protection functions that isolate processes within the system and isolate system processes from user processes. Thus, a program running in a first mode can access only memory mapped by its own process virtual address space; it cannot access memory within the virtual address space of another process unless memory sharing between the processes has been set up.

In addition, the computing system 90 may include a peripheral device controller 83, the peripheral device controller 83 being responsible for communicating instructions from the processor 91 to peripheral devices, such as a printer 94, a keyboard 84, a mouse 95, and a disk drive 85.

The display 86, controlled by a display controller 96, is used to display visual output generated by the computing system 90. Such visual output may include text, graphics, animated graphics, and video. The visual output may be provided in the form of a Graphical User Interface (GUI). The display 86 may be implemented with a CRT-based video display, an LCD-based flat panel display, a gas plasma-based flat panel display, or a touch panel. The display controller 96 includes the electronic components necessary to generate the video signals sent to the display 86.

In addition, the computing system 90 may include communication circuitry, such as a wireless or wired network adapter 97, that may be used to connect the computing system 90 to external communication networks or devices, such as the RAN 103/104/105, core network 106/107/109, PSTN 108, internet 110, WTRU 102, or other networks 112 of fig. 1A, 1B, 1C, 1D, and 1E, to enable the computing system 90 to communicate with other nodes or functional entities of the networks. The communication circuitry may be used to perform the transmitting and receiving steps of certain devices, nodes or functional entities described herein, either alone or in combination with the processor 91.

It should be appreciated that any or all of the devices, systems, methods, and processes described herein may be embodied in the form of computer-executable instructions (e.g., program code) stored on a computer-readable storage medium, which when executed by a processor, such as processor 118 or 91, cause the processor to perform and/or implement the systems, methods, and processes described herein. In particular, any of the steps, operations, or functions described herein may be implemented in the form of such computer-executable instructions executed on a processor of a device or computing system configured for wireless and/or wired network communication. Computer-readable storage media include volatile and nonvolatile, removable and non-removable media implemented in any non-transitory (e.g., tangible or physical) method or technology for storage of information, although such computer-readable storage media do not include signals. Computer-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible or physical medium which can be used to store the desired information and which can be accessed by a computing system.

The asset tracking use case requires that the UE operate worldwide and be reachable throughout its transit to provide tracking and management of the subject asset. These UEs can be attached to containers and have an expected service life of 10-15 years. The UE may be a device with a fixed battery, so efficient power consumption is a critical requirement. Containers may not always have connectivity to the core network due to mobility and poor signal strength. As a result, asset tracking UEs may need to rely on other UEs to aggregate their data and relay it to the core network to optimize power consumption.

The present disclosure proposes the following mechanisms to meet the requirements of asset tracking UEs:

the UE provides a capability indication to the core network to indicate support for relay aggregation. The core network then provides the relay UE with data aggregation parameters for aggregating and relaying the UE data.

The relay UE provides an indication to the core network to create a shared PDU session for sending aggregated data in the user plane. The core network, in turn, configures the UPF and RAN nodes to allocate resources for tunneling data over the shared PDU session.

The relay UE determines whether to aggregate data from other UEs using the data aggregation parameter. The relay UE may also wait a specified aggregation period to allow other UEs to transmit their data.

The relay UE aggregates data from other UEs and sends it on a shared PDU session. The RAN node and the UPF route data on the shared PDU session to one or more PSA UPFs.

UPF supports aggregating DL data from one or more PSA UPFs and sending the aggregated data to the relay UE.

The relay UE and the RAN node communicate with each other so that the RAN node can efficiently allocate radio resources for the UE to send delay tolerant data to the core network.

5G cellular network

Fig. 2 shows a 3GPP 5G non-roaming system architecture in which the various entities interact with each other over the indicated reference points. A User Equipment (UE) device 102 may send and receive data to and from a Data Network (DN)202, such as the internet, through a Radio Access Network (RAN)105 and a User Plane Function (UPF) entity 176 a. This data path is referred to as the 5G User Plane (UP).

Data traffic from the UE102 is sent over the PDU session created in the core network. The following network functions play a role in PDU session management within the core network 106/107/109. Fig. 2 also includes a Network Slice Selection Function (NSSF) 204.

Access and Mobility Management Function (AMF) 172: the UE102 sends an N1 message to the AMF 172 through the RAN node 105 to initially establish a PDU session in the core network. The AMF 172 selects the appropriate SMF 174 to process the PDU session establishment request.

Session Management Function (SMF) 174: SMF 174 is responsible for creating PDU sessions and may contact UDM 197 for subscription information and PCF 184 for policy information for use in processing PDU session establishment requests. SMF 174 may also communicate with UPF 176a and RAN node 105 to establish tunneling information that may be used to route data from UE102 to DN 202, and from DN 202 back to UE 102.

Policy Control Function (PCF) 184: PCF 184 performs authorization and policy decisions for establishing a PDU session.

User Plane Function (UPF)176 a: UPF 176a allocates resources in the user plane to allow data traffic to flow from UE102 to DN 202 and from DN 202 back to UE 102. One or more UPFs 176a, 176b, 176c, etc. in the core network may be used to route data.

Radio Access Network (RAN) 105: RAN node 105 provides communication access from UE102 to the core network for both control plane and user plane traffic.

PDU session management in 5GC

Before sending or receiving data, the UE102 must establish a Protocol Data Unit (PDU) session with a Session Management Function (SMF)174 using a PDU session establishment procedure as shown in fig. 3A and 3B. The process establishes a PDU session for UE102 and SMF 174 configures UPF 176a and RAN node 105 to route traffic between UE102 and DN 202. UE102 then sends and receives data using the PDU session, and RAN node 105 and UPF 176a assist in forwarding data to and from DN 202. Each UE102 needs to establish its own PDU session in order to send and receive data to and from the DN 202. A brief description of this process is provided below, and a detailed description of this process can be found in TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)) incorporated herein by reference.

Step S302: the UE102 sends a PDU session establishment request to the AMF 172 in a NAS message via the RAN node 105. In the request, the UE102 includes a PDU session ID and other session parameters such as PDU session type, request type, requested SSC mode, DNN, etc.

Steps S304, S306, S308, and S310: AMF 172 forwards the request to SMF 174, and SMF 174 obtains session management subscription data from UDM 197 and checks whether UE102 is allowed to create a PDU session. SMF 174 replies to AMF 172 with the results of the request.

Steps S312, S314a, S314b, S316, and S318: optional secondary authorization/authentication may be performed at step S312, and SMF 174 may obtain policy information from PCF 184 at steps S314a and S314 b. SMF 174 may then select a user plane resource at step S316 and generate an IP address/prefix to provide to PCF 184 at step S318.

Steps S320a and S320b: the SMF 174 establishes user plane resources on the selected UPF 176a with rules and parameters for the PDU session.

Step S322: the SMF 174 sends information about the PDU session to the AMF 172, including N2 and N1 information to be returned to the RAN 105 and UE102, respectively.

Steps S324, S326 and S328: the AMF 172 returns the N2 and N1 information to the RAN 105 and UE102, and the RAN 105 acknowledges to the AMF 172 with the AN N3 tunnel information for UPF 176a. Uplink data may now be sent from UE102 through RAN 105 to UPF 176a and on to DN 202.

Steps S330, S332a, and S332b: AMF 172 forwards the AN N3 tunnel information to SMF 174, and SMF 174 then notifies UPF 176a of this information and the corresponding forwarding rules. Thereafter, UPF 176a may send downlink data to UE102 through RAN 105.

In step S334, the AMF 172 receives an Nsmf_PDUSession_UpdateSMContext response. In step S336, the AMF 172 receives an Nsmf_PDUSession_SMContextStatusNotify. At step S338, IPv6 address configuration occurs. In step S340, unsubscription/deregistration is performed.
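The establishment exchange above can be sketched in Python. All class names, fields, and values below are illustrative simplifications for this disclosure's flow, not 3GPP-defined data structures:

```python
# Minimal sketch of the PDU session establishment exchange (steps S302-S332).
# The SMF checks subscription data, selects user-plane resources, and returns
# N1 info for the UE and N2 info for the RAN. All names are illustrative.

class SMF:
    def __init__(self, allowed_dnns):
        self.allowed_dnns = allowed_dnns   # stand-in for UDM subscription data
        self.sessions = {}

    def establish(self, ue_id, pdu_session_id, dnn, ssc_mode):
        # S306-S310: check whether the UE is allowed to create this session
        if dnn not in self.allowed_dnns:
            return {"result": "rejected", "cause": "dnn-not-subscribed"}
        # S316/S320: select user-plane resources (represented by a UPF name)
        upf = "UPF-176a"
        self.sessions[(ue_id, pdu_session_id)] = {"dnn": dnn, "upf": upf,
                                                  "ssc_mode": ssc_mode}
        # S322: N1 info for the UE, N2 info for the RAN
        return {"result": "accepted",
                "n1": {"pdu_session_id": pdu_session_id, "ip": "10.0.0.2"},
                "n2": {"n3_tunnel_peer": upf}}

smf = SMF(allowed_dnns={"internet"})
resp = smf.establish(ue_id="UE-102", pdu_session_id=1, dnn="internet", ssc_mode=1)
print(resp["result"])                                    # accepted
print(smf.establish("UE-102", 2, "corp", 1)["result"])   # rejected
```

The sketch compresses the AMF forwarding, UDM lookup, and UPF configuration into one call purely to show the request/response shape.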

The PDU session-related information shown in table 1 is maintained by the PCF and may be sent to the UE during registration. According to TS 23.503 (3GPP TS 23.503, Policy and Charging Control Framework for the 5G System; V15.4.0 (2018-12)), incorporated herein by reference, "the purpose of PDU session related policy information is to provide policy and charging control related information applicable to a single Monitoring key or an entire PDU session, respectively."

TABLE 1 PDU Session related policy information

UE Route Selection Policy

A UE Route Selection Policy (URSP) is a policy maintained in the UE102 that contains a prioritized list of URSP rules. The URSP may be provided to the UE102 by the PCF, or it may be pre-provisioned in the UE102 by the operator. The structure of a URSP rule is shown in tables 2 and 3 and includes a traffic descriptor and a list of Route Selection Descriptors (RSDs). The UE102 evaluates the URSP rules in order of rule priority and determines whether the information from the application matches the traffic descriptor of a URSP rule. If there is a match, the UE102 selects a Route Selection Descriptor according to the RSD priority value and applies it to the application traffic.
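The rule evaluation described above can be sketched as follows. The rule fields are simplified stand-ins for the structures in tables 2 and 3; the rule contents themselves are assumptions for illustration:

```python
# Illustrative sketch of URSP rule evaluation: rules are checked in priority
# order, the traffic descriptor is matched against the application's traffic,
# and the Route Selection Descriptors (RSDs) are tried by their own priority.

ursp_rules = [
    {"priority": 1,
     "traffic_descriptor": {"dnn": "asset-tracking"},
     "rsds": [{"priority": 1, "pdu_session_type": "IPv6", "dnn": "asset-tracking"}]},
    {"priority": 2,
     "traffic_descriptor": {"dnn": "internet"},
     "rsds": [{"priority": 1, "pdu_session_type": "IPv4", "dnn": "internet"}]},
]

def select_route(app_traffic, rules):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        td = rule["traffic_descriptor"]
        if all(app_traffic.get(k) == v for k, v in td.items()):
            # apply the highest-priority (lowest value) RSD of the matching rule
            return min(rule["rsds"], key=lambda d: d["priority"])
    return None  # no match: default behaviour applies

rsd = select_route({"dnn": "internet"}, ursp_rules)
print(rsd["pdu_session_type"])   # IPv4
```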

Table 2 - UE Route Selection Policy rules

Table 3 - Route Selection Descriptor

Single PDU session with multiple PDU session anchors

A single PDU session may incorporate multiple UPFs 176 within its data path to support selective traffic routing or Session and Service Continuity (SSC) mode 3 operation. During these situations, the UPF 176 that interfaces with DN 202 is referred to as a PDU session anchor or PSA. Another UPF 176 then routes traffic to and from each PSA and acts as a central controller that directs traffic between the PSA and the UE 102. This UPF 176 is referred to as an uplink classifier (UL CL) UPF or a Branch Point (BP) UPF, depending on how the UPF is established. UL CL UPF is established to offload traffic that matches the traffic filter provided by SMF 174, while BP UPF is established to support the IPv6 multihoming case.

Figure 4 shows the user plane architecture for the uplink classifier case. UL CL UPF 176a may interface with multiple PSAs 176b, 176c to direct traffic between UE102 and DN 202. UL CL UPF 176a is established by SMF 174 using operator policy, and UE102 is unaware of this establishment.

The branch point UPF 176a architecture shown in fig. 5 provides similar functionality as the UL CL UPF, but applies only to the IPv6 multihoming case. Unlike the uplink classifier case, however, the UE102 participates in the establishment of the BP UPF 176a by providing an indication of its support for multi-homed IPv6 PDU sessions. BP UPF 176a then forwards UL traffic from UE102 to the appropriate PSA 176b, 176c based on the source prefix contained in the PDU. The UE102 selects the source prefix based on routing information and preferences received from the network.
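The BP UPF's branching decision can be illustrated with a short sketch; the IPv6 prefixes and PSA names are assumptions for illustration:

```python
# Sketch of the branching-point routing decision for IPv6 multi-homing:
# the BP UPF forwards an uplink PDU to the PSA whose prefix matches the
# PDU's source address. Prefixes and PSA names are illustrative.

import ipaddress

bp_routes = {
    ipaddress.ip_network("2001:db8:aaaa::/48"): "PSA-176b",
    ipaddress.ip_network("2001:db8:bbbb::/48"): "PSA-176c",
}

def route_uplink(src_addr):
    addr = ipaddress.ip_address(src_addr)
    for prefix, psa in bp_routes.items():
        if addr in prefix:
            return psa
    raise ValueError("no PSA for source prefix")

print(route_uplink("2001:db8:aaaa::42"))  # PSA-176b
print(route_uplink("2001:db8:bbbb::7"))   # PSA-176c
```

Because the UE selects the source prefix based on network-provided routing preferences, changing the prefix it sources from effectively steers traffic to a different anchor.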

For an IPv6 multihomed PDU session using Session and Service Continuity (SSC) mode 3, a BP UPF may be created during UPF relocation to provide service continuity for the UE 102. Figs. 6A, 6B and 6C illustrate a process in which the SMF 174 initiates a UPF relocation for SSC mode 3 operation. The full description can be found in 3GPP TS23.502, Procedures for the 5G System; V15.4.1 (2019-01); only the main steps relating to the establishment of the BP UPF are summarized here. The UE102 has previously established a PDU session with UPF1 176c, and in steps S600a, S600b and S600c the SMF 174 determines to relocate from UPF1 176c to UPF2 176b to maintain service continuity. The SMF 174 first establishes UPF2 176b and then establishes the BP UPF. Once the BP UPF is established, the SMF 174 notifies UPF1 176c, UPF2 176b, and the RAN 105 (via the AMF 172). Finally, the UE102 is informed of the new IPv6 prefix to use, which will route traffic to UPF2 176b. SMF 174 may then release the resource allocation in both UPF1 176c and the BP UPF.

Once the PDU session is established, a BP or UL CL UPF and additional PSAs can be added to the PDU session. Fig. 7 shows a process for performing this operation. A summary of the process is provided herein; it is described in detail in 3GPP TS23.502, Procedures for the 5G System; V15.4.1 (2019-01). At the start of the process (S702), the UE102 has established a PDU session with PSA1 176c. At step S704, SMF 174 may decide to establish a new PSA (PSA2 176b) due to UE mobility or based on detection of a new flow. At step S706, SMF 174 then establishes a BP or UL CL UPF and provides it with information to communicate with both PSA1 176c and PSA2 176b. Next, at steps S708, S710, and S712, the SMF 174 updates PSA1 176c, PSA2 176b, and the RAN 105 with information to communicate with the BP or UL CL UPF 176a. If IPv6 multihoming is enabled, SMF 174 informs UE102 of the new IPv6 prefix for PSA2 176b and may reconfigure the UE102's original IPv6 prefix for PSA1 176c (steps S714 and S716).

Similarly, a PSA and the BP or UL CL UPF can be removed from the PDU session, as shown in fig. 8. Once again, an overview of the process is provided here; for a detailed description, see 3GPP TS23.502, Procedures for the 5G System; V15.4.1 (2019-01), incorporated herein by reference. The process starts with a PDU session with an established BP or UL CL UPF 176a and two PSAs 176b, 176c (step S802). SMF 174 determines to remove PSA1 176c based on, for example, UE mobility or flow termination (step S804). If IPv6 multihoming is active, SMF 174 reconfigures UE102 with appropriate information to cease using the IPv6 prefix associated with PSA1 176c and to direct the UE to use the IPv6 prefix associated with PSA2 176b. SMF 174 may then update RAN 105 and PSA2 176b with the information to communicate with each other (steps S806 and S810). Finally, SMF 174 may release the BP or UL CL UPF 176a and PSA1 176c (steps S812 and S814).

Group tunneling in 5G vertical LANs

TR 23.734 (3GPP TR 23.734, Study on enhancement of 5GS for Vertical and LAN Services, V16.0.0 (2018-12)), incorporated herein by reference, is a study on enhancing 5GS with vertical and LAN services, and one of its key issues relates to support of 5GLAN group management. The study concluded that scenario #29 should be the basis for normative work to support 5GLAN group management. The scheme is divided into two architectures: 1) a centralized user plane architecture and 2) a distributed user plane architecture.

For the centralized user plane architecture, the study shows that a single SMF and a single PSA UPF are responsible for all PDU sessions in a 5GLAN group communication. The SMF will manage all PDU sessions of the 5GLAN group, and the PSA UPF will enforce QoS for all UE traffic in the 5GLAN group. Similarly, for the distributed user plane architecture, a single SMF will manage the PDU sessions for 5GLAN group communications. However, one or more UPFs are used to terminate these PDU sessions, and an Nx interface is introduced to carry 5GLAN group traffic within Nx tunnels.

Integrated Access and Backhaul (IAB) for NR

TR 38.874 (3GPP TR 38.874, Study on Integrated Access and Backhaul, V16.0.0 (2018-12)), incorporated herein by reference, provides a study on the relevant aspects of relaying access traffic over backhaul links. In this study, as shown in fig. 9, relay service is provided to UEs using Integrated Access and Backhaul (IAB) nodes. The IAB donor node provides the final access to the core network after receiving aggregated data from other IAB nodes. It is important to note that all IAB nodes in fig. 9 are gNB nodes used in the backhaul, and that the data pipes the backhaul provides route data independently per PDU session, i.e., each PDU session is handled with its respective QoS. In practice, the communication between the UE102 and the IAB node, and between the IAB nodes themselves, is NR-Uu, as shown in fig. 10.

Various architectures were proposed in this study, and the architecture shown in fig. 10 was chosen as the basis for the normative work. In this architecture, an adaptation layer is added in layer 2 to assist in routing data from the UE102 to the core network 106 through the IAB backhaul and to ensure that the QoS of each PDU session is met. Several variants of where the adaptation layer sits in the user plane were identified during the study, as shown in figs. 11A, 11B, 11C, 11D and 11E, and further work was planned to recommend a single solution.

In the IAB architecture, IAB nodes relay UE traffic over backhaul links created between the IAB nodes and the core network. These backhaul links serve as data "pipes" in which traffic from each UE is relayed independently to the core network 106, because the IAB nodes have no visibility into the PDU session that conveys data from the UE102 to the core network 106. The IAB nodes do not perform data aggregation, which would be a factor in buffering delay tolerant data.

Problems addressed by embodiments

Use case

TR22.836 (3GPP TR22.836, Study on Asset Tracking Use Cases, V0.1.0(2018-11)), incorporated herein by reference, describes a use case in which an organization owns assets that must be tracked, and the assets can be transported by aircraft, ship, truck or rail. The asset may be transported worldwide and may have multiple ownership changes throughout its transport. Tracking assets may involve more than just traditional location tracking, as the assets themselves may require refrigeration or other environmentally stable conditions (e.g., humidity or light levels). Other monitoring events such as shock detection and container door opening may also be required to detect potential theft of the asset. As a result, improving container tracking and supply chain management is imperative for the global economy (TR 22.836).

Within each container, a UE may collect data from multiple sensors to provide tracking of the container's internal environment. The UE may collect data from the sensors at set periodic intervals and provide status to the upstream relay UE depending on the level of service required (e.g., once per hour to twice per day). Note that the data in asset tracking use cases is typically small in size and exhibits some degree of latency tolerance. Thus, data can be buffered and relayed as long as the required service level is met. The data may then be aggregated by the relay UEs and periodically sent to the management tracking system through the core network. Because it is mounted on a container, the UE must be battery powered and may need to have the same service life as the container (which may be as long as 12-15 years) without battery replacement. The management tracking system may also request monitoring of the container throughout transit.

For asset tracking of containers using UE relays, fig. 12 represents a use case from TR 22.866 (3GPP TR 22.866, enhanced Relays for Energy Efficiency and Extensive Coverage, V0.1.0(2018-11)), incorporated herein by reference. A gateway device 1202 with a UE communication module on board the vessel can relay data received from the various containers 1204 back to the management tracking system via the core network through satellite communications 1206. Each container may collect data from various sensors within the container and be able to send all of its data back to the management tracking system even if it does not have connectivity to the core network 106. The containers may be stacked on top of each other and may even be located on the lower deck of the vessel, which may interfere with signal transmission and reception. As a result, the UE farthest from gateway device 1202 in the ship may need the assistance of other intermediate UEs to relay its data to gateway device 1202. Gateway device 1202 may host a UE that aggregates the data received from all containers and sends the aggregated data to core network 106 for routing to the appropriate management tracking system.

The UE in the gateway acts as the head of the relay link to interface with the core network. All UEs communicating with each other are then part of the relay link. The data may originate from the UE furthest away in the link and be sent to an upstream UE, which may add its own data and will then relay the aggregated data to the next UE in the relay link. This process is repeated until the aggregated data reaches the relay UE acting as the gateway device, which then sends all the data to the core network for routing to the appropriate tracking management system in the DN 202. Note that the relay link may consist of more than the two stages shown in fig. 12.
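The hop-by-hop aggregation along the relay link can be shown with a toy sketch; the UE names and sensor readings are hypothetical:

```python
# Toy sketch of the multi-hop relay link of fig. 12: each hop appends its
# own readings to the payloads received from downstream and forwards the
# result upstream, until the gateway UE uplinks everything to the core
# network. Names and values are illustrative only.

def relay_hop(own_data, downstream_payloads):
    # aggregate downstream data with this UE's own data, tagged by source
    return downstream_payloads + [own_data]

# container UE at the far end of the link
hop1 = relay_hop({"ue": "UE-C", "temp_c": 4.1}, [])
# intermediate container UE
hop2 = relay_hop({"ue": "UE-B", "temp_c": 3.8}, hop1)
# gateway UE aggregates once more and uplinks via satellite
uplink = relay_hop({"ue": "UE-A", "temp_c": 5.0}, hop2)

print([p["ue"] for p in uplink])  # ['UE-C', 'UE-B', 'UE-A']
```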

Note that the above-described use case and its requirements may also apply to other use cases. For example, a boxcar tracking and fleet monitoring use case may have many of the same requirements and problems as the container use case. A typical freight train may have 50-120 cars, each 40 feet long, traveling at 50-80 km/h. Each car may have a UE installed to collect data from various sensors within the car and may need to provide the data to a tracking management system in the cloud. The cars at the end of the train may be 700 m-1 km from the front of the train, with intermittent connectivity to the core network. As a result, they may require assistance from UEs in other cars to relay their data.

Remotely connected healthcare monitoring is another use case with requirements similar to the container and boxcar tracking use cases. A patient may live in a rural area where cellular coverage is sparse and may require remote monitoring. The patient is not in a critical state, so the data is delay tolerant, e.g., a delay of a few hours is not a problem. Battery powered body sensors and home monitoring devices may provide data that needs to be relayed to the hospital system that provides care to the patient.

Problems addressed by embodiments

The above examples demonstrate the importance of supporting asset tracking in a 5G system. UEs attached to containers are expected to provide global coverage while optimizing battery consumption. As containers are transported through a wide variety of, sometimes harsh, environments, maintaining connectivity to the core network throughout the journey can be a challenge for the UE. As a result, the UE may need to rely on other UEs to provide relay service back to the core network in order to provide proper tracking of the subject asset.

When the UE operates in relay mode, two main problems arise: 1) how the UE aggregates data from other UEs while balancing the efficient power consumption requirements, and 2) how the core network routes the aggregated data to and from the relay UE. The relay UE may need to wait a certain time to allow other UEs to transmit their data. The relay UE also needs to be power efficient to minimize consumption of its own battery. Thus, the relay UE must balance power consumption against waiting for other UEs to provide their data.

Once the data has been aggregated at the relay UE, the next issue is how the core network knows how to route the data from each UE to the appropriate PSA UPF. Existing data demultiplexing methods rely on tunneling. Using such methods in 5GC means not taking full advantage of the capabilities already built into 5GC in release 15, such as the ability to dynamically insert a UPF into the data path. In addition, while a container is being transported, its UE may need to join or leave a relay link if the container's path diverges. A change in relay link membership may require a change in user plane resources of the core network to provide proper routing of the aggregated data.

Similar to the case of relay UEs, the core network 106 may need to buffer data received from various management tracking systems targeting different UEs in the relay link, aggregate the data, and forward it to the relay UE. The core network 106 may also need to wait a certain time to aggregate data and minimize the frequency of communicating with the relay UE to preserve the relay UE's battery life. The relay UE is then required to de-aggregate the data and relay it to the appropriate downstream UE until the data reaches the destination UE.

Thus, the asset tracking use case requires that the UE be operated globally and reachable throughout its transit to provide tracking and management of the subject asset. These UEs can be attached to containers and have an expected service life of 10-15 years. The UE may be a device with a fixed battery, so efficient power consumption is a critical requirement. Containers may not always have connectivity to the core network due to mobility and poor signal strength. As a result, asset tracking UEs may need to rely on other UEs to aggregate their data and relay it to the core network to optimize power consumption.

The present disclosure sets forth the following mechanisms to meet the requirements of asset tracking UEs from TR22.836 (3GPP TR22.836, Study on Asset Tracking Use Cases, V0.1.0(2018-11)) and TR 22.866 (3GPP TR 22.866, enhanced Relays for Energy Efficiency and Extensive Coverage, V0.1.0(2018-11)), incorporated herein by reference:

the UE provides a capability indication to the core network to indicate support for relay aggregation. The core network then provides the relay UE with data aggregation parameters for aggregating and relaying the UE data.

The relay UE provides an indication to the core network to create a shared PDU session for sending aggregated data on the user plane. In turn, the core network configures the UPF and RAN nodes to allocate resources for tunneling data over the shared PDU session.

The relay UE determines whether to aggregate data from other UEs using the data aggregation parameter. The relay UE may also wait a specified aggregation period to allow other UEs to transmit their data.

The relay UE aggregates data from other UEs and sends it on a shared PDU session. The RAN node and the UPF route data on the shared PDU session to one or more PSA UPFs.

UPF supports aggregating DL data from one or more PSA UPFs and sending the aggregated data to relay UEs.

UEs within the relay link synchronize communications with each other to optimize power consumption.

The relay UE and the RAN node communicate with each other so that the RAN node can efficiently allocate radio resources for the UE to send delay tolerant data to the core network.

The main scheme 1:

the core network 106 provides the UE102 with a relay aggregation policy:

UE102 informs core network 106 that it can aggregate and relay data from other UEs

The core network 106 checks its policies for parameters to provide to the UE102

Core network 106 returns policy for aggregating data to relay UE

Supporting concepts of main scheme 1, in which the relay UE holds the policy and performs data aggregation using information from the policy:

1. the core network 106 may be informed by a UE registration request including a relay aggregation or data aggregation indication

2. The aggregation policy consists of a traffic group ID and a delay tolerant data period parameter

3. UE102 internally saves the policies as URSP rules for aggregation and relay decisions

4. UE102 receives data from another UE with a traffic group ID and makes an aggregation decision if it matches an internally stored traffic group ID

5. The UE102 waits for data from other UEs using the delay tolerant data period parameter.
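These aggregation decisions can be sketched in a few lines of Python. The traffic group ID and delay tolerant data period parameters come from the disclosure; the class, method names, and values are illustrative:

```python
# Minimal sketch of the relay UE's aggregation decision in main scheme 1:
# data from another UE is buffered only if its traffic group ID matches the
# stored policy, and the buffer is flushed once the delay-tolerant data
# period has been waited out.

class RelayAggregator:
    def __init__(self, traffic_group_id, delay_tolerant_period_s):
        self.traffic_group_id = traffic_group_id
        self.period = delay_tolerant_period_s
        self.buffer = []

    def on_data(self, traffic_group_id, payload):
        if traffic_group_id != self.traffic_group_id:
            return False            # not our group: do not aggregate
        self.buffer.append(payload)
        return True

    def flush_due(self, waited_s):
        # send the aggregate only after the configured waiting period
        if waited_s >= self.period and self.buffer:
            out, self.buffer = self.buffer, []
            return out
        return None

agg = RelayAggregator(traffic_group_id="group-7", delay_tolerant_period_s=3600)
agg.on_data("group-7", b"ue1-report")
agg.on_data("group-9", b"other-group")   # ignored: wrong traffic group
agg.on_data("group-7", b"ue2-report")
print(agg.flush_due(waited_s=3600))      # [b'ue1-report', b'ue2-report']
```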

The main scheme 2:

the UE102 establishes a shared PDU session to send aggregated data to the core network 106:

UE102 establishes a PDU session with a primary PDU session ID and one or more secondary PDU session IDs, and includes a shared PDU session indication

Core network 106 establishes routing rules and provides the routing rules to the relay aggregation UPF

Core network 106 reconfigures the tunnel information for each PSA UPF associated with the secondary PDU session ID

The core network 106 informs the RAN node 105 that the shared PDU session contains delay tolerant data

The core network informs the UE102 that a shared PDU session was created.

Supporting concepts of main scheme 2:

1. PDU session establishment may include a shared PDU session indication and a traffic group identifier

2. The secondary PDU session IDs may consist of a list of previously established PDU sessions from other UEs in the relay link

3. The network retrieves the PDU session context for each secondary PDU session ID and generates routing rules to be sent to the RA UPF

4. Relay UE organizes data according to PDU session ID

5. The network provides the PDU session ID and delay tolerant data period to the RAN node
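The request contents and the routing rules the network derives from them can be sketched as follows; all structures are illustrative stand-ins for the actual NAS/N4 messages:

```python
# Sketch of the shared PDU session establishment request of main scheme 2,
# and of the routing rules the SMF might derive for the RA UPF from the
# secondary PDU session contexts. Names and IDs are illustrative.

def build_shared_pdu_request(primary_id, secondary_ids, traffic_group_id):
    return {"pdu_session_id": primary_id,
            "shared_pdu_session": True,          # new indication
            "traffic_group_id": traffic_group_id,
            "secondary_pdu_session_ids": list(secondary_ids)}

def derive_routing_rules(request, session_contexts):
    # map each secondary PDU session ID to its anchor (PSA UPF) so the
    # RA UPF can de-aggregate uplink traffic per session
    return {sid: session_contexts[sid]["psa_upf"]
            for sid in request["secondary_pdu_session_ids"]}

contexts = {11: {"psa_upf": "PSA-176b"}, 12: {"psa_upf": "PSA-176c"}}
req = build_shared_pdu_request(primary_id=5, secondary_ids=[11, 12],
                               traffic_group_id="group-7")
print(derive_routing_rules(req, contexts))  # {11: 'PSA-176b', 12: 'PSA-176c'}
```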

The main scheme 3:

the UE102 informs the RAN node 105 that it has delay tolerant data:

UE102 providing an indication of the availability of delay tolerant data to RAN node 105

After the RACH procedure is completed, the UE102 waits for a paging message from the RAN node 105

RAN node 105 pages UE102 to receive delay tolerant data

UE102 sends aggregated data to RAN node 105

Supporting concepts of main scheme 3:

UE102 providing a delay tolerance indication in the establishmentCause information element of an RRCSetupRequest message
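The main scheme 3 exchange can be sketched with a small state machine. The message names mirror the text above; the state machine itself and the cause value string are assumptions for illustration:

```python
# Sketch of main scheme 3: the relay UE flags delay-tolerant data in its
# connection request, the RAN node defers it, and the UE waits to be paged
# before sending. All structures here are illustrative.

def rrc_setup_request(ue_id):
    return {"ue": ue_id, "establishmentCause": "delayTolerantAccess"}

class RanNode:
    def __init__(self):
        self.pending = []

    def on_setup_request(self, msg):
        if msg["establishmentCause"] == "delayTolerantAccess":
            self.pending.append(msg["ue"])   # defer until resources are free
            return "wait-for-paging"
        return "grant-now"

    def page_next(self):
        # page the next deferred UE so it can send its aggregated data
        return self.pending.pop(0) if self.pending else None

ran = RanNode()
print(ran.on_setup_request(rrc_setup_request("UE-102")))  # wait-for-paging
print(ran.page_next())                                    # UE-102
```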

To address the relay and data aggregation issues identified in the asset tracking use case, several improvements are proposed to support the processing of aggregated data from relay UEs. First, the core network 106 may provide the relay UE with a policy containing aggregated traffic parameters (such as the newly proposed traffic group ID and delay tolerant data period parameters). The core network 106 may provide these policies to the NAS layer of the UE102 in response to the UE102 including a relay aggregation or data aggregation indication in its registration request. Using the new parameters, the NAS layer of the UE102 can decide whether to relay data from other UEs and how long to wait for such data.

After receiving the new parameters, the NAS layer of the UE102 may establish a "shared PDU session" with the core network 106. The shared PDU session is a special PDU session having a primary PDU session ID and one or more secondary PDU session IDs. The shared PDU session contains data that has been aggregated, as described herein, from the multiple PDU sessions represented by the one or more secondary PDU session IDs. In the PDU session establishment request, the UE102 can include the primary PDU session ID for the shared PDU session, a shared PDU session indication, a traffic group ID, and a list of secondary PDU session IDs for the other UEs, where each secondary PDU session ID provided is associated with aggregated data on the shared PDU session. These secondary PDU session IDs and the traffic group ID may be included with the aggregated data to inform the Relay Aggregation (RA) UPF how to route the data from the various UEs. The secondary PDU session IDs may have been received by the relay UE from the other UEs via PC5 signaling.

The SMF 174 will use the secondary PDU session IDs to determine the UPF anchor points 176b, 176c, etc. identified by each PDU session ID. The SMF 174 will then perform a UPF selection procedure to select an RA UPF that de-aggregates UL traffic and aggregates DL traffic for those PDU session IDs. The SMF 174 will provide the secondary PDU session IDs, traffic group ID and delay tolerant data period to the selected RA UPF. The SMF 174 will also configure the RA UPF and PSA UPFs with tunneling information so that data can be tunneled between them. The RA UPF will in turn use the secondary PDU session IDs, traffic group ID and delay tolerant data period in making UL and DL traffic routing decisions. For UL traffic routing, the RA UPF may interface with multiple PSA UPFs using the secondary PDU session IDs, the traffic group ID, and the tunnel parameters received from the SMF. For DL traffic routing, the RA UPF may use the delay tolerant data period and traffic group ID to aggregate data destined for different UEs from potentially multiple PSA UPFs before sending the aggregated data to the relay UE via the RAN node 105.
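The UL de-aggregation step at the RA UPF might look like the following sketch; the tunnel identifiers and session IDs are hypothetical:

```python
# Sketch of uplink de-aggregation at the RA UPF: each item in the aggregated
# payload carries its secondary PDU session ID, which selects the tunnel to
# the corresponding PSA UPF. Tunnel table contents are illustrative.

psa_tunnels = {11: "tunnel-to-PSA-176b", 12: "tunnel-to-PSA-176c"}

def deaggregate_uplink(aggregated, traffic_group_id, expected_group):
    if traffic_group_id != expected_group:
        return []                       # wrong traffic group: nothing routed
    routed = []
    for item in aggregated:
        tunnel = psa_tunnels[item["pdu_session_id"]]
        routed.append((tunnel, item["payload"]))
    return routed

ul = [{"pdu_session_id": 11, "payload": b"ue1"},
      {"pdu_session_id": 12, "payload": b"ue2"}]
print(deaggregate_uplink(ul, "group-7", "group-7"))
# [('tunnel-to-PSA-176b', b'ue1'), ('tunnel-to-PSA-176c', b'ue2')]
```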

For the improvements disclosed herein, the following assumptions are made:

1. each UE will have established a PDU session

2. Traffic group ID and delay tolerant data period parameters have been established in PCF policies to indicate to relay and remote UEs and RA UPFs when and how to aggregate data

3. UEs support communication with other UEs via PC5 or another communication medium

4. UE-to-UE communication results in the formation of a group of UEs, referred to as a relay link, whose data is aggregated and relayed by relay UEs within the relay link. How a relay link is formed is outside the scope of this disclosure

5. Because the data is delay tolerant, it is assumed that the QoS requirements are similar or identical for all data flows in the relay link.

6. The SMF, PCF, UPF, and UE support the shared PDU session concept proposed in this disclosure

The protocol stack of the user plane is presented in fig. 13 to represent an exemplary embodiment of the functionality to support asset tracking use cases. The functions are:

introduce an aggregation layer for remote and relay UEs to communicate through PC5 or other proximity communication mechanisms. The aggregation layer performs data aggregation and de-aggregation as required for asset tracking use cases and is further described in sections entitled "uplink UE relay data aggregation" and "downlink UPF relay data aggregation". Note that although the aggregation layer is shown as a separate layer, the required functionality may be added to an existing layer, such as the PDU layer or the SDAP layer shown in the figure.

The relay UE will also have an aggregation/PDU layer when interfacing with the core network. The aggregation/PDU layer performs not only the data aggregation function, but also shared PDU session management, as described in the section entitled "shared PDU session establishment procedure".

It should be understood that one or more additional relay UEs may be interposed between the remote UE and the relay UE shown in fig. 13 to support multi-hop scenarios.

It should also be understood that each relay UE may provide data to its own asset tracking application in addition to the data aggregation functions it performs. Note that this function is not explicitly shown in fig. 13.

New functionality is introduced in the RA UPF to provide de-aggregation of UL data traffic and aggregation of DL data traffic, as set forth in the sections entitled "uplink UE relay data aggregation" and "downlink UPF relay data aggregation". The RA UPF de-aggregates data sent over the shared PDU session to route UL traffic to one or more PSA UPFs. Conversely, the RA UPF aggregates data received from one or more PSA UPFs for the duration specified by the delay tolerant data period and sends the aggregated data in the shared PDU session, which is tunneled to the relay UE. The delay tolerant data period sent to the RA UPF may be the same as or different from the delay tolerant data period parameter provided to the relay UE.

Other proposed functions not shown in fig. 13 are PDU session related information provided by the policy in the PCF (section entitled "PDU session related policy information in PCF"), propagation of aggregation policy to the UE in the relay link (section entitled "propagation of aggregation policy from CN to UE"), UE communication within the relay link that optimizes power consumption (section entitled "uplink UE relay data aggregation"), and relay UE interaction with the RAN node (section entitled "UE informs RAN of availability of delay tolerant data for transmission").

PDU session related policy information in PCF

To support the asset tracking use case, it is proposed to add some information elements to the PDU session related policy information in the PCF 184. The new information elements are shown in table 4. Note that these new information elements are additions to the information elements shown in table 1. The SMF 174 may use the information provided in the policy to configure the UE to support the data aggregation and relaying proposed herein. For the asset tracking use case, two parameters are proposed to make the use case possible: the traffic group ID and the delay tolerant data period. Note that the term "traffic group ID" refers to a group of UEs whose data is aggregated together and relayed to the core network as part of a relay link. Other terminology, such as traffic link ID, relay group ID, etc., may also be applied. These two parameters are provided to the UE during UE registration or other procedures in which the core network can provide policies to the UE, as described below. Each UE in the relay link uses the parameters to make relay decisions. The traffic group ID is an identifier used by the UE to identify data that may need to be aggregated and relayed, while the delay tolerant data period tells the UE how long to wait for delay tolerant data from other UEs. At the expiration of this time, the relay UE may aggregate all received data and send it to an upstream relay UE or to the core network.
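The two proposed parameters can be pictured as follows. This is a minimal sketch only: Table 4 defines just the two new information elements, so the container types and all other field names below are illustrative assumptions, not 3GPP-defined structures.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AggregationPolicy:
    traffic_group_id: str               # identifies the relay-link group whose data is aggregated
    delay_tolerant_data_period: float   # seconds a relay UE waits for data from other UEs

@dataclass
class PduSessionPolicyInfo:
    pdu_session_id: int
    aggregation: Optional[AggregationPolicy] = None  # absent for ordinary PDU sessions

# Example policy entry a PCF might hold for one PDU session (values are illustrative).
policy = PduSessionPolicyInfo(
    pdu_session_id=5,
    aggregation=AggregationPolicy("TG-asset-1", 30.0),
)
```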

Table 4 new PDU session related policy information

Propagation of aggregation policy from CN to UE

As previously described, the core network 106 may provide the relay aggregation policy to the UE 102 during the UE registration procedure. Figs. 14A, 14B and 14C show the registration procedure from TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)), incorporated herein by reference, by which a UE registers with the core network in order to be authorized to receive services.

Exemplary non-limiting improvements to the registration process are:

At step S1402, the relay UE 102 may provide a relay aggregation indication in the registration request to inform the core network 106 that the UE 102 supports relay aggregation. In addition, a remote UE may provide a data aggregation indication in the registration request to inform the core network 106 that the UE 102 wishes to have its data aggregated by a relay UE. Either the relay aggregation indication or the data aggregation indication may be used to provision the UE with an aggregation policy containing the parameters shown in table 4.

At step S1404, if the old AMF 172b holds information about the established PDU session, the old AMF 172b may include SMF information DNN, S-NSSAI, and PDU session ID. If there is a shared PDU session, a list of traffic group IDs and delay tolerant data period parameters and secondary PDU session IDs are also provided. The shared PDU session will be described later.

At step S1406, the new AMF 172a performs AM policy association establishment with the PCF 184 by sending Npcf_AMPolicyControl_Create to the PCF 184. As part of this request, the new AMF 172a includes the relay aggregation indication or the data aggregation indication. In response, the PCF 184 provides access and mobility related policy information. A portion of the policy information includes the traffic group ID and delay tolerant data period parameters that may be returned to the UE 102.

At step S1408, the new AMF 172a returns a registration accept message to the UE 102 and may include the traffic group ID and the delay tolerant data period in the message. The traffic group ID and the delay tolerant data period are delivered to the NAS layer of the UE 102 in a URSP rule within the NAS-PCF message.

As previously described, the UE102 uses the UE routing policy (URSP) it maintains to determine how to send data over the PDU session. The inclusion of the traffic group ID and the delay tolerant data period in the URSP rule enables the UE102 to allocate traffic to be sent on a PDU session. Within the routing descriptor of the URSP rule, the "shared PDU session" indication may be saved in "PDU session type select", "access type preference", or some other information element. This indication may be used to direct the relay UE to route its own application data for asset tracking, as well as to aggregate application data from other UEs obtained from UE-to-UE communications (e.g., communications through PC 5). Traffic assigned to a traffic group ID may be sent over the aggregated PDU session.
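The URSP-based routing decision described above can be sketched as follows. The rule representation and field names are illustrative assumptions for this disclosure's extension, not 3GPP-defined structures.

```python
from typing import Optional

def select_route(ursp_rules, traffic_group_id: Optional[str]):
    """Return the PDU session that traffic tagged with traffic_group_id maps to."""
    for rule in ursp_rules:
        if rule.get("traffic_group_id") == traffic_group_id:
            return rule["pdu_session"]
    return "default"                      # no rule matched: use the default PDU session

# One URSP rule whose route selection descriptor carries the proposed
# "shared PDU session" indication plus the two new parameters.
rules = [
    {"traffic_group_id": "TG-asset-1",
     "pdu_session": "shared",             # the "shared PDU session" indication
     "delay_tolerant_data_period": 30}
]
```

With such a rule in place, traffic assigned to the traffic group ID is directed onto the shared PDU session, while untagged traffic falls through to the default session.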

There may be other ways to provide the traffic group ID and delay tolerant data period parameters to the UE. One such procedure is a UE configuration update procedure for transparent UE policy transfer as described in TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)) incorporated herein by reference. Note that the aggregation policy may be part of the UE policy, and these terms will be used interchangeably hereinafter. This process may be initiated by the core network 106 due to an update by the tracking management system that adds or deletes the traffic group ID for relay UE monitoring, or adds or deletes a UE from the relay link. The process is performed as described in TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)) incorporated herein by reference, and new aggregation policy data is sent to the UE during step 3.

Although the core network 106 may initiate the aggregation policy update as previously described, the UE 102 may itself initiate the aggregation policy update due to changes in the operating environment of the UE. For example, the UE 102 may arrive at a new location, such as a loading dock, and may need to request updates to its aggregation policy from the core network. Alternatively, the UE 102 may discover a new UE through PC5 and initiate an update to its aggregation policy to check whether data from the new UE should be aggregated. Fig. 15 shows a UE-triggered V2X policy configuration procedure from TS 23.287 (3GPP TS 23.287, Architecture enhancements for 5G System (5GS) to support Vehicle-to-Everything (V2X) services, V0.2.0(2019-03)) incorporated herein by reference, which may be applied to let the UE initiate the aggregation policy update procedure described previously. The UE in fig. 15 may provide the traffic group ID and associated PDU session ID parameters in the UE policy container at step S1502 to initiate a UE configuration update procedure for transparent UE policy transfer as described in TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)) incorporated herein by reference.

Shared PDU session establishment procedure

A shared or aggregated PDU session is proposed to allow a relay UE to inform the core network that this special PDU session is to be used for tunneling aggregated data. The shared PDU session may be associated with a list of secondary PDU session IDs for one or more UEs and used to carry aggregated data organized by each respective PDU session ID. It is assumed that each UE has previously established a PDU session with the core network and has a corresponding PDU session ID. Each UE then transmits the data to be aggregated, together with its PDU session ID and traffic group ID, to the relay UE using, for example, PC5 communication. The relay UE checks the traffic group ID and, if it matches an internally stored traffic group ID, aggregates the data and organizes it according to PDU session ID. The relay UE may also associate a timestamp with the data it receives.

As previously mentioned, the shared PDU session is a special PDU session known only to the relay link terminating UE (the UE in communication with the core network) and the relay aggregation UPF. Other UEs in the relay link need not be aware of the existence of the shared PDU session; they need only aggregate data from the other UEs and ensure that the data is associated with the corresponding PDU session IDs. The relay link terminating UE may establish a shared PDU session to send the aggregated data of the relay link to the core network 106. Similarly, the RA UPF needs to know about the shared PDU session in order to perform the necessary processing described below.

An RA UPF supporting this shared PDU session concept can de-aggregate UL data toward one or more PSA UPFs, similar to an UL CL UPF or BP UPF. However, unlike the UL CL UPF or BP UPF case, establishment of the shared PDU session automatically selects a UPF that supports relay aggregation to manage the routing of UL and DL data associated with the shared PDU session to and from multiple PSA UPFs. The relay UE transmits a combined message including data for multiple PDU session IDs from other UEs. Within the combined message, the traffic group ID is inserted along with the aggregated data, organized according to the PDU session ID to which the data belongs. The traffic group ID allows the RA UPF to parse the combined message into individual PDU sessions and route them to the associated PSA UPFs.

To support the shared PDU session concept, the session management procedures of TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)) incorporated herein by reference are modified to allow a request for the shared PDU session function. Using figs. 3A and 3B as the basis for a modification to the PDU session establishment procedure, the following differences from that procedure support a shared PDU session:

At step S302, the relay UE 102 includes the shared PDU session indication in a PDU session establishment request to the core network, along with a list of secondary PDU session IDs to be associated with the PDU session ID. The PDU session ID provided in the request may be referred to as the master PDU session ID and is used for the shared PDU session; the resulting tunnel created for this PDU session will carry aggregated data from multiple UEs. The IDs of those PDU sessions are found in the secondary PDU session ID list, which contains the PDU session IDs of all UEs sharing the PDU session, including the PDU session ID of the relay UE if the relay UE also provides data. In this case, the PDU session ID provided in the PDU session establishment request is used only as the master PDU session ID of the shared PDU session. Alternatively, the master PDU session ID provided in the shared PDU session establishment request may serve a dual purpose: 1) as the session ID of the shared PDU session, and 2) as the session ID of the PDU session in which the relay UE provides its own application data. In this alternative case, the secondary PDU session ID list contains the PDU session IDs of all UEs in the relay link except the PDU session ID of the relay UE.
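The contents of the modified establishment request of step S302 might be sketched as follows. Only the fields this disclosure adds are shown, and the field names are illustrative assumptions, not 3GPP-defined.

```python
def build_shared_session_request(master_pdu_session_id, secondary_pdu_session_ids):
    return {
        "pdu_session_id": master_pdu_session_id,       # master PDU session ID
        "shared_pdu_session": True,                    # shared PDU session indication
        # PDU session IDs of the UEs in the relay link; per the text, this list
        # includes the relay UE's own PDU session ID only if the relay UE also
        # contributes data (otherwise the master ID may serve that dual purpose).
        "secondary_pdu_session_ids": list(secondary_pdu_session_ids),
    }

# Example: relay UE requests a shared session for three downstream UEs.
req = build_shared_session_request(9, [1, 2, 3])
```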

At step S308, the SMF 174 may need to retrieve PDU session context information from the UDM 197 for each PDU session ID in the secondary PDU session ID list, if that information is not already available in the SMF. Similarly, if SM subscription data is not available, the SMF 174 may need to retrieve SM subscription data from the UDM 197 and subscribe to obtain notifications about changes to the subscription data.

At step S310, if SMF 174 successfully obtains all context information for all PDU session IDs in the secondary PDU session ID list at step S308, SMF 174 returns one or more context IDs to AMF 172. If SMF 174 cannot obtain all the context information required for each PDU session ID, then SMF 174 may reject the request.

At steps S314a and 314b, SMF 174 may establish an SM policy association to obtain PCC rules from PCF 184 for the shared PDU session. SMF 174 provides the PDU session ID found in the secondary PDU session ID list in this process.

At step S316, the SMF 174 selects the UPF 176a that supports the shared PDU session function to provide the relay aggregation described herein.

At steps S320a and S320b, the SMF 174 performs N4 session establishment with the selected UPF 176a, which acts as the RA UPF. The RA UPF is provided with packet detection, enforcement, and reporting rules for each PDU session ID in the secondary PDU session ID list. If the SMF 174 allocates CN tunnel information, it is provided to the UPF 176a; note that this tunnel information corresponds to the N3 interface. The SMF 174 then initiates an N4 session modification for each PSA UPF associated with a PDU session ID found in the secondary PDU session ID list. This ensures that all data traffic from the PSA UPFs is routed to the RA UPF.

At step S322, the SMF 174 sends the CN tunnel information to the RAN node together with the other information found in Namf_Communication_N1N2MessageTransfer. Additionally, the SMF 174 may provide the RAN node 105 with an indication that the data of the shared PDU session is delay tolerant. The RAN node 105 may in turn use this indication when communicating with the relay UE, as described below. The aggregation policy may also be provided to the UE 102 at this time in a NAS message sent through the AMF. The aggregation policy may contain the traffic group ID and delay tolerant data period parameters that the UE 102 uses to make data aggregation decisions for UE-to-UE communications (e.g., over PC5).

Note that the PDU session modification Procedures from TS23.502 (3GPP TS23.502, Procedures for the 5G System; V15.4.1(2019-01)) incorporated herein by reference may also be used to modify a shared PDU session. The relay UE may add or remove a secondary PDU session ID from the shared PDU session whenever a UE is added to or removed from the relay link. In addition, the procedure may be used to change a non-shared PDU session into a shared PDU session. A third scenario is that the relay UE may first establish a shared PDU session before any UE provides its data and later update the shared PDU session by providing secondary PDU session IDs using the PDU session modification procedure.

One of the previously made assumptions is that all UEs in the relay link have established their own PDU sessions with the core network 106 before data aggregation starts. As a result, each UE102 proceeds with the PDU session setup procedure of fig. 3A and 3B, and as part of this procedure, tunneling information is provided to the UPF and RAN nodes 105 in order to properly route UL data transmitted from the UE102 and DL data transmitted to the UE102, respectively. An alternative concept to optimize this process is to introduce a "virtual PDU session" created by SMF 174 that minimizes signaling overhead when performing the PDU session establishment procedure.

The optimization is for the UE 102 to include a new indicator when establishing a PDU session with the core network 106. This indicator, referred to as a "virtual PDU session" indication, informs the SMF 174 that the UE 102 supports sending delay tolerant data that may be relayed by another UE at some future time within a shared PDU session. In response, the SMF 174 may decide to omit certain signaling associated with the user plane configuration (in the UPF 176a and RAN node 105), such as steps S316, S320a, and S320b. Additionally, the SMF 174 may limit the information provided to the RAN node 105 in the messages of steps S322 and S324. A UE 102 using this alternative will not be able to send uplink data over the user plane as shown in figs. 3A and 3B, but will instead send its data to a relay UE in the relay link for routing over the user plane.

Uplink UE relay data aggregation

Once the core network 106 provides the UEs with the traffic group ID and the delay tolerant data period parameters, a relay UE may begin aggregating data from other UEs. Relay UEs may save these parameters to assist in future processing when they receive UE-to-UE communications (e.g., through PC5). A relay UE may use the traffic group ID to determine whether to aggregate data received from other UEs. Other UEs in the relay link (both remote and relay UEs) may include the provisioned traffic group ID in communications whose data needs to be aggregated and relayed.

Since the data in the asset tracking use case is delay tolerant, the delay tolerant data periods may be synchronized between UEs to limit the time each UE must stay awake, thereby preserving battery life. This period may be interpreted as the period during which the relay UE will wait for data from other UEs in the relay link. Battery life is preserved on a relay UE if the UEs in the relay link synchronize their periods to occur close to one another; otherwise, the relay UE may need to stay awake longer in order to receive data from the other UEs, which drains the relay UE's battery faster.

One way to synchronize the occurrence of these periods across multiple UEs in the relay link is to have the receiving UE return the time remaining in the period after receiving data from another UE. For example, suppose UE1 transmits data to UE2 within the same relay link. UE2 may return the time remaining in its delay tolerant data period to UE1 so that UE1 can synchronize its timer with the timer of UE2. This process is repeated, and after all UEs in the relay link have received this information, the timers within the UEs of the relay link will be synchronized.

During the delay tolerant data period, the relay UE may continue to receive data from other UEs and, if the provided traffic group IDs match, aggregate the data appropriately. Included with the data and traffic group ID is the PDU session ID of the respective UE. The relay UE may use the PDU session IDs to organize the aggregated data. If a UE transmits multiple data items within the delay tolerant data period, the relay UE may aggregate them under the same PDU session ID. This information is maintained within the relay link so that the RA UPF can correctly resolve the aggregated data.

When a relay UE receives data from another UE in the same relay link, it checks the traffic group ID provided by the other UE. If it matches the internally stored traffic group ID, the relay UE buffers the data and, if a timer is not already running, starts a timer based on the delay tolerant data period. If the timer is already running, the relay UE aggregates the data according to the provided PDU session ID. The relay UE may also return the time remaining on the timer to notify the other UE when the delay tolerant data period will expire. Upon expiration of the timer, the relay UE may send the aggregated data to an upstream relay UE or to the core network. The aggregated data may include, for example, the traffic group ID and a timestamp of when the data was aggregated. Note that the individual UEs may also provide timestamp information in their data.
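The relay UE behaviour described in this section can be sketched as follows, assuming a single traffic group per relay UE; the class and method names are illustrative.

```python
import time
from collections import defaultdict

class RelayAggregator:
    def __init__(self, traffic_group_id, delay_tolerant_data_period, now=time.monotonic):
        self.traffic_group_id = traffic_group_id
        self.period = delay_tolerant_data_period
        self.now = now                            # injectable clock for testing
        self.deadline = None                      # timer not yet running
        self.buffer = defaultdict(list)           # PDU session ID -> data items

    def on_data(self, traffic_group_id, pdu_session_id, data):
        """Handle data from a downstream UE. Returns the time remaining in the
        delay tolerant data period, or None on a traffic group ID mismatch
        (the data is not aggregated)."""
        if traffic_group_id != self.traffic_group_id:
            return None
        if self.deadline is None:                 # first matching data starts the timer
            self.deadline = self.now() + self.period
        self.buffer[pdu_session_id].append(data)  # aggregate under the PDU session ID
        return self.deadline - self.now()         # lets the sender synchronize its timer

    def flush_if_expired(self):
        """On timer expiry, return the aggregated payload for the upstream hop
        (or the core network); otherwise return None."""
        if self.deadline is None or self.now() < self.deadline:
            return None
        payload = {"traffic_group_id": self.traffic_group_id,
                   "timestamp": self.now(),       # timestamp of aggregation
                   "data": dict(self.buffer)}
        self.buffer.clear()
        self.deadline = None
        return payload
```

Returning the remaining time from `on_data` mirrors the timer synchronization mechanism above, and the flushed payload carries the traffic group ID, timestamp, and per-PDU-session data as described.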

Another exemplary method for UEs to synchronize their delay timers is shown in fig. 16. The relay UE 102b (e.g., UE2) may notify the upstream relay UE 102c (e.g., UE3) that it has delay tolerant data to send at some time in the future after starting its timer. This communication may enable the upstream relay UE 102c to better schedule radio resources when communicating with downstream UEs. Further, the communication can also synchronize delay tolerant data periods between UEs in the relay link. Note that the above function may be performed at the previously proposed aggregation layer or at another layer where the function exists. It is further noted that although the method is described for delay tolerant data, it may also be applied to non-delay-tolerant data to optimize radio resource usage in the system.

Step S1602: A timer is started within relay UE2 102b to enable aggregation of data from other downstream UEs.

Step S1604: UE2 102b informs UE3 102c that it will have delay tolerant data to send to UE3 102c at some time in the future. UE2 102b may also provide the remaining time on its timer and, if known, the PDU session IDs of the UEs whose data it aggregates.

Step S1606: UE3 102c acknowledges to UE2 102b and may include a future time different from the time provided at step S1604, for the case in which the time from step S1604 would overlap with the scheduling budget for another UE.

Step S1608: At step S1608a, UE3 102c schedules radio resources for UE2 102b based on the time provided at step S1604 or step S1606. If a new timer value was provided at step S1606, UE2 102b adjusts its timer to reflect the difference at step S1608b. If the timer value provided at step S1606 is less than the value from step S1604, UE2 102b may need to communicate the difference to its downstream UEs. Note that once UE3 102c has scheduled radio resources for all downstream UEs it is serving, UE3 102c may sleep and wake up before any of the scheduled radio resources need to be available.

Step S1610: Upon expiration of its internal timer, UE2 102b transmits the aggregated data to UE3 102c. Due to the scheduling at step S1608a, radio resources will already be available in UE3 102c to receive the data.
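The negotiation of steps S1602-S1610 can be sketched from the downstream UE's side. Representing the send time as a plain number and the upstream UE as a callback are simplifying assumptions for illustration.

```python
def negotiate_send_time(proposed, upstream_grant):
    """Announce a future send time (step S1604). The upstream UE may return a
    different time if the proposal overlaps another UE's scheduling budget
    (step S1606); the local timer is then adjusted to match (step S1608b)."""
    granted = upstream_grant(proposed)
    return granted if granted is not None else proposed

# Example: the upstream UE shifts the proposed slot 2 seconds later to avoid
# a clash with another downstream UE's scheduled resources.
negotiate_send_time(30.0, lambda t: t + 2.0)
```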

Each data item stored in the buffer of the relay UE is associated with the PDU session ID identifying the PDU session to which the data belongs. If the relay UE receives multiple data items associated with a given PDU session ID, it aggregates the new data with the old data under the same PDU session ID. The relay UE may buffer the data of each UE independently and associate the aggregated data with the PDU session ID of that UE. Once the timer expires, the relay UE forwards all data in its buffer, along with a timestamp, to the next upstream relay UE or to the core network 106.

During UE-to-UE communication (e.g., via PC5), QoS may be provided in messages sent to the relay UE. This QoS applies to the PC5 communication, but it also relates to the desired QoS of the PDU session ID associated with the message. The aggregation layer shown in fig. 13 may use this information received from the SDAP layer to derive the appropriate QoS Flow ID (QFI) to use when sending data to the core network. If necessary, the relay UE may create multiple shared PDU sessions to support different QoS requirements received from the UEs in the relay link. In the simplified case where all UEs in the relay link have the same QoS, only one shared PDU session needs to be established.

When aggregated data is available for transmission to the core network, the data is packaged together and transmitted on the previously created shared PDU session. The relay UE ensures that the traffic group ID is included and that the aggregated data is clearly delineated and classified by the corresponding PDU session IDs. The data is sent via the RAN node 105 to the RA UPF 176 for separation into the individual PDU sessions, i.e., toward the different PSA UPFs associated with the subject UEs.

Fig. 17 illustrates the operation of a relay UE aggregating UL data from other UEs in the relay link and sending the aggregated data to RA UPF 176 in a shared PDU session through RAN node 105.

Step S1702: A timer based on the delay tolerant data period parameter starts on all UEs that are part of the relay link. Each UE that is part of the relay link has been provided with a traffic group ID and a delay tolerant data period, and the procedure to form the relay link has been performed. In addition, UE3 102c has established a shared PDU session with the core network 106.

Step S1704: Later, UE1 102a sends data to be aggregated to UE2 102b. Similarly, UE2 102b sends the data received from UE1 102a, as well as data from its own sensors, to UE3 102c. UE3 102c aggregates the data from UE1 102a and UE2 102b, as well as data from its own internal sensors. Note that when transmitting data to an upstream UE, each UE includes the traffic group ID and its own PDU session ID, and each relay UE may add a timestamp when relaying the aggregated data.

Step S1706: Once the timer expires, UE3 102c aggregates all data received from the UEs in the relay link and sends it on the previously established shared PDU session. UE3 102c may include the traffic group ID, a timestamp, the shared PDU session ID, and the aggregated data with the corresponding PDU session IDs.

Step S1708: When UL data is received in the shared PDU session, the RA UPF 176 separates the UL data and forwards the individual data toward the different PSA UPFs 176a, 176b, 176c according to the PDU session ID provided with each UE's data.

Step S1710: The RA UPF 176 sends the UL data to each of the corresponding PSA UPFs 176a, 176b, 176c based on the Packet Detection Rules (PDRs) it received at establishment of the shared PDU session.
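The RA UPF handling of steps S1708-S1710 can be sketched as follows: check the traffic group ID, split the combined payload by PDU session ID, and forward each piece toward the PSA UPF configured for that session. The message layout and function names are illustrative assumptions.

```python
def deaggregate_ul(message, expected_traffic_group_id, psa_routes, send):
    """psa_routes maps PDU session ID -> PSA UPF tunnel; send(tunnel, data)
    forwards one de-aggregated data item, per the PDRs installed at setup."""
    if message["traffic_group_id"] != expected_traffic_group_id:
        raise ValueError("unknown traffic group")
    for pdu_session_id, data in message["data"].items():
        send(psa_routes[pdu_session_id], data)

# Example: a combined message carrying data for two PDU sessions is split and
# routed to two different PSA UPF tunnels (tunnel names are hypothetical).
sent = []
deaggregate_ul({"traffic_group_id": "TG1", "data": {1: b"a", 2: b"b"}},
               "TG1", {1: "psa-a", 2: "psa-b"},
               lambda tunnel, data: sent.append((tunnel, data)))
```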

An alternative to using a shared PDU session as described herein is to have the relay UE send individual PDUs to the core network 106. The relay UE may send data associated with each PDU session ID on a corresponding PDU session separate from data of other PDU session IDs. In this case, the relay UE does not need to create a shared PDU session, but transmits data to the core network using PDU session IDs received from other UEs. Further, the relay UE may transmit data to the core network 106 using a combination of individual PDU sessions and shared PDU sessions.

Downlink UPF relay data aggregation

During shared PDU session establishment, the RA UPF 176 is configured with the PDU session IDs associated with all UEs covered by the relay link, as well as the aggregation parameters: the traffic group ID and the delay tolerant data period. Note that the delay tolerant data period provided to the RA UPF may differ from the same parameter provided to the UEs; the value may be based on network configuration or policy. When DL data arrives from the different PSA UPFs, the RA UPF 176 may buffer the DL data and start a timer based on the delay tolerant data period to wait for more data for the shared PDU session. This minimizes the frequency with which the RA UPF 176 sends data to the relay UE. Similar to the relay UE, the RA UPF 176 organizes the DL data by its PDU session ID. Upon expiration of the timer, the RA UPF 176 forwards the aggregated data to the relay UE through the RAN node 105.

Fig. 18 represents the operation of the RA UPF 176 aggregating the DL data from the individual PSA UPFs 176a, 176b, 176c before transmitting the DL data to the relay UE. Fig. 18 assumes that the UE has been provided with the necessary aggregation information, that the relay link has been formed, and that a shared PDU session has been established to relay aggregated data from all UEs in the relay link.

Step S1802: the RA UPF 176 receives DL data from the various PSA UPFs 176a, 176b, 176c and aggregates the data based on the individual PDU session IDs.

Step S1804: The RA UPF 176 waits for the duration specified by the delay tolerant data period before transmitting the DL data to UE3 102c. During this time, the RA UPF 176 continues to aggregate data from the various PSA UPFs 176a, 176b, 176c.

Step S1806: The RA UPF 176 sends the aggregated data, organized by the respective PDU session IDs and including the traffic group ID, to UE3 102c over the shared PDU session through the RAN node 105. A timestamp may also be added.

Step S1810: UE3 102c separates the DL data, extracts the data for its own PDU session ID, and repackages the remaining data for the other UEs according to their PDU session IDs.

Step S1812: UE3 102c sends the repackaged data to UE2 102b; UE2 102b performs similar functions to UE3 102c and forwards data destined for other UEs to UE1 102a. This process is repeated until the DL data reaches all downstream UEs in the relay link.
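The downstream handling of steps S1810-S1812 can be sketched as a relay UE extracting the DL data addressed to its own PDU session and repackaging the remainder for the next hop. The message layout is an illustrative assumption.

```python
def split_dl(message, own_pdu_session_id):
    """Extract this UE's own data and repackage the rest for downstream UEs."""
    data = dict(message["data"])
    own = data.pop(own_pdu_session_id, None)     # data for this UE's own session
    forward = None
    if data:                                     # repackage only if anything remains
        forward = {"traffic_group_id": message["traffic_group_id"], "data": data}
    return own, forward

# Example: UE3 (own PDU session ID 3) keeps its data and forwards the rest.
own, forward = split_dl({"traffic_group_id": "TG1", "data": {3: b"c", 1: b"a"}}, 3)
```

Each relay UE in the link applies the same split, so the forwarded message shrinks hop by hop until all downstream UEs have received their data.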

UE informs RAN of availability of delay tolerant data for transmission

While aggregating data from other UEs, the relay UE may need to perform a RACH procedure to establish or reestablish an RRC connection with the RAN node 105. The relay UE can perform this procedure and indicate to the RAN node 105 that it has delay tolerant data. The RAN node 105 can then better utilize radio resources by paging the relay UE at some time in the future and dedicating the current radio resources to higher priority traffic.

Fig. 19 shows a procedure in which the UE 102 performs a RACH procedure with the RAN node 105 and indicates that it has delay tolerant data. The RAN node 105 may use the indication to optimize radio resource usage and decide to page the UE 102 for the delay tolerant data at a future time. The procedure is as follows.

Step S1902: The UE 102 reads broadcast information (e.g., a System Information Block (SIB)) sent by the RAN node 105. The UE 102 receives an indication in the SIB that the RAN node 105 supports the "mo-PageWhenReady" feature. This feature is further explained in step S1910.

Step S1904: when the process of data aggregation starts, a timer based on a delay tolerant data period is started within the relay UE.

Step S1906: The relay UE 102 initiates the RACH procedure by transmitting a random access preamble. A RACH preamble may be specifically defined for use in the delay tolerant use case; if so, steps S1910 and S1912 may be skipped.

Step S1908: the RAN node 105 replies with a random access response. If a delay tolerant RACH preamble is defined and transmitted at step S1906, the RAN node 105 may return a code to indicate that the network will page the UE102 when it is the appropriate time to transmit data. The message may also include an indication of how long the UE102 should wait before reattempting if the UE102 does not receive a page. The message may also include a request reference number, and the process then proceeds to step S1914.

Step S1910: The relay UE 102 may then send an RRCSetupRequest or RRCResumeRequest message to the RAN node 105 to establish or resume an RRC connection, depending on the RRC state of the UE. In this message, the relay UE 102 may include an indication in the establishmentCause or resumeCause information element. The indication may be "mo-PageWhenReady", which tells the network that it should page the UE 102 when it is an appropriate time to send the data. Further, the UE 102 may provide an indication of how long it is willing to wait to be paged.

Step S1912: the RAN node 105 returns an RRCSetup or RRCResume message. The network may indicate that the request is denied or accepted. The reject or accept message may have a cause code or indication stating that the network will page the UE 102 when it is an appropriate time to send data. The message may also include an indication of how long the UE 102 should wait before reattempting if it does not receive a page. The message may also include a request reference number.

Step S1914: instead of sending the RRCSetupComplete or RRCResumeComplete message, the UE 102 waits for the timer to expire and continues to aggregate data, because the network has indicated that it will page the UE 102 for the data at a later time.

Step S1916: after a period of time, the RAN node 105 pages the UE 102 for the delay tolerant data. The RAN node 105 has flexibility in deciding when to send the paging message, depending on radio resource availability, and may exploit the fact that the data is delay tolerant to optimize radio resource usage. Note that the RAN node 105 may page the relay UE 102 before the timer of step S1914 expires. The relay UE 102 may then send its aggregated data to the RAN node 105. The paging message sent to the UE 102 may include an indication that the UE 102 is being paged for MO data. The paging message may also include the reference number provided to the UE 102 at step S1908 or step S1912, so that the UE 102 may associate the page with the original request for MO data.

Step S1918: the relay UE 102 performs a RACH procedure, establishes an RRC connection, and sends a service request. The RRC connection request and the service request may include an indication that the request is a response to paging for MO data. This indication may cause the network to process the request with a higher priority, because the UE 102 was previously delayed or backed off.

Step S1920: the UE102 then transmits the aggregated data to the RAN node 105.
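The UE-side behavior of steps S1904 through S1920 can be sketched as a small state machine. The sketch below is illustrative only: class, state, and method names (RelayUE, on_page, etc.) are hypothetical and do not correspond to any 3GPP-defined API.

```python
from enum import Enum, auto

class UEState(Enum):
    AGGREGATING = auto()       # collecting data, timer running (step S1904)
    WAITING_FOR_PAGE = auto()  # network promised to page when ready (S1914)
    SENDING = auto()           # RRC connected, sending aggregated data (S1920)

class RelayUE:
    """Illustrative UE-side flow of the 'mo-PageWhenReady' procedure (Fig. 19).

    All names are hypothetical; this is not a 3GPP-defined interface.
    """
    def __init__(self, delay_tolerant_period_s):
        self.state = UEState.AGGREGATING
        self.timer_s = delay_tolerant_period_s   # step S1904
        self.buffer = []
        self.page_reference = None

    def on_rrc_reject_with_page_promise(self, reference_number):
        # Steps S1908/S1912: network will page the UE when it is ready.
        self.page_reference = reference_number
        self.state = UEState.WAITING_FOR_PAGE    # step S1914: keep aggregating

    def on_data(self, pdu):
        self.buffer.append(pdu)

    def on_page(self, reference_number):
        # Step S1916: act on the page only if it matches the original request.
        if reference_number == self.page_reference:
            self.state = UEState.SENDING         # steps S1918/S1920 follow
            sent, self.buffer = self.buffer, []
            return sent
        return None
```

A page carrying a non-matching reference number is ignored, mirroring the use of the reference number from step S1908 or S1912 to tie the page back to the original request.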

Graphical user interface

Fig. 20 shows an example of a user interface for the relay aggregation options that may be configured on the UE 102. Values for some parameters may be pre-provisioned to the UE 102, the parameters may be obtained from the core network 106 when the UE policy containing the aggregation parameters is configured or updated, or the user may even override the parameters as needed.
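The parameters behind such a UI could be modeled as a single options object in which user overrides take precedence over pre-provisioned or network-supplied values. The field names below are hypothetical illustrations, not the actual labels of Fig. 20.

```python
from dataclasses import dataclass, field

@dataclass
class RelayAggregationOptions:
    """Hypothetical parameter set behind the relay aggregation UI of Fig. 20.

    Defaults may be pre-provisioned, updated from the core network via UE
    policy, or overridden by the user; all field names are illustrative.
    """
    aggregation_enabled: bool = False
    traffic_group_id: str = ""
    delay_tolerant_period_s: int = 0
    user_override: dict = field(default_factory=dict)

    def effective(self) -> dict:
        # User overrides take precedence over provisioned/network values.
        base = {
            "aggregation_enabled": self.aggregation_enabled,
            "traffic_group_id": self.traffic_group_id,
            "delay_tolerant_period_s": self.delay_tolerant_period_s,
        }
        base.update(self.user_override)
        return base
```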

In an example embodiment, the electronic device is a UE 102, such as a relay UE that is the topmost UE in the relay link. However, the UE 102 may be located anywhere in the relay link. In an example embodiment, the circuitry of an electronic device (e.g., UE 102) may include at least one or more processor devices (e.g., CPUs). In an example embodiment, the circuitry may also include one or more memories storing computer-executable instructions. In an example embodiment, the circuitry may include one or more of the components shown in Fig. 1F.

In an example embodiment, an electronic device (e.g., UE 102) receives data from other electronic devices (e.g., other UEs, etc.).

In an example embodiment, an electronic device (e.g., a UE 102, such as relay UE3 102c) includes circuitry (e.g., one or more components of Fig. 1F) configured to send a request to a core network 106, 107, 109 (e.g., to one or more nodes in the core network, such as the AMF 172). The request may inform the core network that the electronic device is capable of aggregating and relaying data from other electronic devices (e.g., the remote UE1 102a, the relay UE2 102b, etc.). The circuitry is also configured to receive one or more aggregation policies from the core network. The one or more aggregation policies may provide parameters that instruct the electronic device how to aggregate and relay data. The circuitry is further configured to receive data from the other electronic devices; aggregate the data from the other electronic devices based on the parameters; and send the aggregated data to the core network 106, 107, 109.
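The request/policy/aggregate/send exchange of this embodiment can be reduced to a short sketch. The FakeCoreNetwork stand-in and all names below are hypothetical; real signaling would travel over NAS messages to the AMF and related network functions.

```python
class FakeCoreNetwork:
    """Illustrative stand-in for the core network side of the exchange."""
    def __init__(self, policy):
        self.policy = policy
        self.received = []

    def register_relay_capability(self, request):
        # The request tells the network the UE can aggregate and relay data;
        # the network answers with an aggregation policy.
        assert request.get("relay_aggregation")
        return self.policy

    def deliver(self, aggregated):
        self.received.extend(aggregated)
        return len(aggregated)


def relay_aggregation_flow(core, neighbor_data):
    """The five steps of the embodiment: request, policy, receive,
    aggregate per the policy parameters, and send upstream."""
    policy = core.register_relay_capability({"relay_aggregation": True})
    group_id = policy["traffic_group_id"]
    aggregated = [d for d in neighbor_data
                  if d.get("traffic_group_id") == group_id]
    return core.deliver(aggregated)
```

Only data whose traffic group ID matches the policy's traffic group ID parameter joins the aggregate, as described for the embodiments below.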

In an example embodiment, circuitry of the electronic device is configured to establish a shared Protocol Data Unit (PDU) session such that the electronic device is capable of sending aggregated data to a core network. The circuitry is also configured to receive a response from the core network that the shared PDU session has been created.

In an example embodiment, the one or more aggregation policies include, for example, at least a traffic group ID parameter and a parameter indicating a delay tolerant data period.

In an example embodiment, the data received from one of the other electronic devices may include, for example, a traffic group ID, and the electronic device will aggregate the received data if the traffic group ID matches the traffic group ID parameter of the one or more aggregation policies.

In an exemplary embodiment, the data received from one of the other electronic devices may include, for example, a traffic group ID and a PDU session ID.

In an example embodiment, the circuitry of the electronic device is configured to set an amount of time to wait for data from other electronic devices based on the delay tolerant data period.
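One minimal way to realize this wait is a buffer that is flushed when the delay tolerant data period elapses. In the sketch below timer expiry is an explicit call so the logic is testable; in a real NAS implementation a timer service would drive it, and all names are hypothetical.

```python
class NasAggregationTimer:
    """Sketch of the NAS-layer wait: buffer data from other UEs until the
    delay tolerant data period elapses, then hand the batch to a callback.

    Class and callback names are hypothetical, not 3GPP-defined.
    """
    def __init__(self, delay_tolerant_period_s, on_flush):
        self.period_s = delay_tolerant_period_s  # from the aggregation policy
        self.on_flush = on_flush
        self.buffer = []

    def receive(self, pdu):
        # Data arriving from other electronic devices while the timer runs.
        self.buffer.append(pdu)

    def expire(self):
        # Called when the delay tolerant data period elapses; the batch is
        # released for aggregation and upstream transmission.
        batch, self.buffer = self.buffer, []
        self.on_flush(batch)
```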

In an example embodiment, a non-access stratum (NAS) layer of the electronic device may set an amount of time to wait for data from other electronic devices.

In an example embodiment, circuitry of the electronic device is configured to establish a shared PDU session with a primary PDU session ID and one or more secondary PDU session IDs.
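A shared PDU session of this kind can be represented as one primary ID plus a deduplicated list of secondary IDs, one per remote UE. The structure below is an illustration under that assumption, not a 3GPP information element.

```python
from dataclasses import dataclass, field

@dataclass
class SharedPDUSession:
    """Sketch of a shared PDU session: the relay's own session is the
    primary, and the sessions of the remote UEs are carried as secondary
    PDU session IDs. Field names are illustrative only."""
    primary_pdu_session_id: int
    secondary_pdu_session_ids: list = field(default_factory=list)

    def add_remote(self, pdu_session_id: int):
        # Each remote UE's session is registered once.
        if pdu_session_id not in self.secondary_pdu_session_ids:
            self.secondary_pdu_session_ids.append(pdu_session_id)
```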

In an example embodiment, the circuitry of the electronic device is configured to inform a Radio Access Network (RAN) node (e.g., RAN 105) that delay tolerant data will be sent to the RAN node 105 in the future, enabling the RAN node 105 to optimize allocation of radio resources.

In an example embodiment, the circuitry of the electronic device is configured to receive a page from the RAN node 105 to inform the electronic device that the RAN node 105 is ready to receive aggregated data.

In an example embodiment, the circuitry of the electronic device is configured to relay aggregated data to the core network through the RAN node 105 and a relay aggregation user plane function (RA UPF) 176. See Fig. 17.

In an example embodiment, the RA UPF 176 receives the aggregated data, separates it, and forwards each separated data unit to a different PDU Session Anchor (PSA) UPF (e.g., 176a, 176b, 176c) according to the individual PDU session ID provided with the data from each of the other electronic devices. See Fig. 17.
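The de-aggregation step amounts to grouping the received items by their individual PDU session IDs and routing each group to the mapped PSA UPF. The sketch below assumes a simple session-to-PSA routing table; the identifiers mirror Fig. 17 but are otherwise illustrative.

```python
from collections import defaultdict

def deaggregate(aggregated, psa_for_session):
    """Split aggregated data at the RA UPF and route it per PDU session ID.

    psa_for_session is an assumed routing table mapping each individual PDU
    session ID to its PSA UPF identifier (e.g., 176a/176b/176c in Fig. 17).
    """
    per_psa = defaultdict(list)
    for item in aggregated:
        # Each item carries the PDU session ID supplied by its source UE.
        psa = psa_for_session[item["pdu_session_id"]]
        per_psa[psa].append(item)
    return dict(per_psa)
```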

In an example embodiment, circuitry of the electronic device is configured to coordinate with other electronic devices to synchronize delay budgets for transmitting data to optimize battery consumption of the electronic device and the other electronic devices.
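One simple synchronization rule consistent with this idea: all coordinating UEs transmit at a single time no later than the tightest delay budget allows, so each device wakes once instead of once per flow. The function below is a hypothetical sketch of that rule, not a mandated algorithm.

```python
def synchronized_send_time(now, delay_budgets_s):
    """Pick one common transmission time that honors every UE's delay budget.

    Sending once, as late as the tightest budget allows, lets all UEs sleep
    until a single shared wake-up and so reduces battery consumption.
    Inputs and units (seconds) are illustrative.
    """
    if not delay_budgets_s:
        raise ValueError("no delay budgets to synchronize")
    # The tightest (smallest) budget bounds how long everyone may wait.
    return now + min(delay_budgets_s)
```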

In an example embodiment, the electronic device is a user equipment functioning as a relay node device.

In an example embodiment, the electronic device is configured to receive data from other electronic devices via PC5 communication.

In an example embodiment, the electronic device is configured to receive a broadcast from the RAN node 105 indicating support for receiving delay tolerant data and paging the electronic device when ready to receive delay tolerant data.

In an example embodiment, the request may include a relay aggregation indication or a data aggregation indication.

In an example embodiment, the electronic device adds a time stamp to data received from other electronic devices.
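A minimal sketch of such timestamping, assuming a Unix-time receive stamp (the patent does not mandate any particular format); the injectable clock parameter exists only to make the sketch testable.

```python
import time

def stamp(data, clock=time.time):
    """Attach a receive timestamp to data relayed from another UE.

    The 'received_at' key and Unix-time format are assumptions for this
    illustration; the original data is left unmodified.
    """
    return {**data, "received_at": clock()}
```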

In an example embodiment, a method is performed by an electronic device. The method may include: sending a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices. The method may include receiving one or more aggregation policies from a core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data. The method may include receiving data from other electronic devices. The method may include aggregating data from other electronic devices based on the parameters. The method may include transmitting aggregated data to a core network.

In an example embodiment, a method for a network node is performed. The method may include receiving a request from an electronic device. The request may inform the network node that the electronic device is capable of aggregating and relaying data from other electronic devices. The method may include sending one or more aggregation policies to the electronic device. The one or more aggregation policies may provide parameters that instruct the electronic device how to aggregate and relay data. The one or more aggregation policies may include at least a traffic group ID parameter and a parameter indicating a delay tolerant data period.

In an example embodiment, the network node is an access and mobility management function (AMF) 172.

In an example embodiment, the network nodes are an access and mobility management function (AMF) 172 and one or more other nodes or components (e.g., one or more of the network nodes shown in Figs. 1B, 1C, 1D, and 2). In an example embodiment, the network node is one or more network nodes (e.g., one or more of the network nodes shown in Figs. 1B, 1C, 1D, and 2).

In an example embodiment, the method may include receiving aggregated data. The received aggregated data may be data that has been aggregated from other electronic devices based on the provided parameters. The method may include any other steps described herein.

In an example embodiment, a non-transitory computer-readable medium (e.g., one or more of non-removable memory 130 and removable memory 132) includes computer-executable instructions that, when executed by circuitry (e.g., processor 118) of an electronic device (e.g., UE 102), cause the electronic device to: sending a request to a core network, wherein the request informs the core network that the electronic device is capable of aggregating and relaying data from other electronic devices; receiving one or more aggregation policies from a core network, wherein the one or more aggregation policies provide parameters that instruct the electronic device how to aggregate and relay data; receiving data from other electronic devices; aggregating data from other electronic devices based on the parameters; and sending the aggregated data to the core network.

It should be appreciated that any of the methods and processes described herein may be embodied in the form of computer-executable instructions (i.e., program code) stored on a computer-readable storage medium, which, when executed by a machine such as a computer, a server, an M2M terminal device, an M2M gateway device, or the like, perform and/or implement the systems, methods, and processes described herein. In particular, any of the steps, operations, or functions described above may be implemented in the form of such computer-executable instructions. Computer-readable storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, although such computer-readable storage media do not include signals. Computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical medium which can be used to store the desired information and which can be accessed by a computer.

In describing the preferred embodiments of the presently disclosed subject matter as illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the claimed subject matter is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner to accomplish a similar purpose.

Thus, it will be appreciated by those skilled in the art that the disclosed systems and methods may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure without departing from the breadth or scope of the disclosure. Thus, although specific configurations are discussed herein, other configurations may be employed. The present disclosure enables numerous modifications and other embodiments (e.g., combinations, rearrangements, etc.) that are within the scope of one of ordinary skill in the art and are deemed to fall within the scope of the disclosed subject matter and any equivalents thereof. Features of the disclosed embodiments may be combined, rearranged, omitted, etc., within the scope of the invention to produce additional embodiments. Moreover, some features may sometimes be used to advantage without the corresponding use of other features. Accordingly, applicants intend to embrace all such alternatives, modifications, equivalents, and variations that are within the spirit and scope of the disclosed subject matter.

Reference to a singular element does not mean "one and only one" unless explicitly so stated, but rather "one or more." Furthermore, where a phrase similar to "at least one of A, B, or C" is used in the claims, it is intended that the phrase be interpreted to mean that A may exist alone in an embodiment, B may exist alone in an embodiment, C may exist alone in an embodiment, or any combination of the elements A, B, and C may exist in an embodiment; for example, A and B, A and C, B and C, or A and B and C.

Unless the phrase "means for …" is used to specifically recite a claim element herein, that claim element should not be construed under 35 U.S.C. § 112(f). As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalents are intended to be embraced therein.
