Technique for enhancing UDP network protocol to efficiently transmit large data units

Document No.: 1834652 | Publication date: 2021-11-12

Reading note: This technique, "Technique for enhancing UDP network protocol to efficiently transmit large data units," was designed and created by A. Jha and S. Ramachandran on 2020-08-28. Abstract: Techniques are disclosed for enhancing the UDP network protocol to efficiently transport large data units. The User Datagram Protocol (UDP) is a well-known protocol for transmitting data between two nodes of a network. When data is too large to fit into a single UDP packet that can be transmitted between two nodes, the data needs to be fragmented and transmitted with multiple packets, and then reassembled at the receiving node. Techniques are disclosed herein for offloading such segmentation, transmission, and reassembly, e.g., from a Central Processing Unit (CPU) of a node. Such offloading may be efficiently performed, for example, by repurposing legacy protocol fields used in UDP transport, such as the Internet Protocol (IP) Identification (ID), time-to-live (TTL), type-of-service (TOS), and/or ethertype fields, to encode information needed for efficient fragmentation, out-of-order reception, and reassembly.

1. A method, comprising:

generating a User Datagram Protocol (UDP) payload including application data from an application;

generating a plurality of UDP packets, each UDP packet including a portion of the application data;

encoding an initial value into a first Internet Protocol (IP) header field of an initial packet of the plurality of UDP packets, encoding a final value into the first IP header field of a final packet of the plurality of UDP packets, and encoding an intermediate value into the first IP header field of each intermediate packet of the plurality of UDP packets between the initial packet and the final packet;

incrementing a value of a second IP header field for each of the plurality of UDP packets from the initial packet to the final packet; and

transmitting the plurality of UDP packets to a receiving device.

2. The method of claim 1, wherein the plurality of UDP packets, when received by the receiving device, cause the receiving device to reassemble the UDP payload using the plurality of UDP packets according to a sequence defined by the values of the second IP header field of each of the plurality of UDP packets.

3. The method of claim 1, wherein the first IP header field is at least one of a Time To Live (TTL) header field or a type of service (TOS) header field and the second IP header field is an IP Identification (ID) header field.

4. The method of claim 1, wherein the initial value instructs the receiving device to start aggregating packets and the final value instructs the receiving device to stop aggregating packets.

5. The method of claim 1, further comprising setting a do not fragment (DF) bit to 1.

6. The method of claim 1 wherein the first IP header field is a time-to-live (TTL) header field and the initial value, the final value, and the intermediate value are encoded as the two most significant bits of the TTL header field.

7. The method of claim 1, wherein generating the plurality of UDP packets comprises performing a UDP Segmentation Offload (USO) operation.

8. The method of claim 1, wherein the initial value, the final value, and the intermediate value are different from one another, and the initial value, the final value, and the intermediate value are selected from the group consisting of: (0,1), (1,0), and (1,1).

9. A system, comprising:

user Datagram Protocol (UDP) hardware to generate a UDP payload including application data from an application;

internet Protocol (IP) hardware to:

generating a plurality of UDP packets, each UDP packet including a portion of the application data;

encoding a value into a time-to-live (TTL) field of each of the plurality of UDP packets to indicate that at least a first packet, a final packet, and intermediate packets between the first packet and the final packet of the plurality of UDP packets are aggregated by a receiving device; and

incrementing a value of an IP Identification (ID) field for each of the plurality of UDP packets from the first packet to the final packet; and

a driver for transmitting the plurality of UDP packets to the receiving device.

10. The system of claim 9, further comprising ethernet hardware to encode a custom ethertype into an ethertype field of each of the plurality of UDP packets to indicate to the receiving device that the plurality of UDP packets are to be aggregated.

11. The system of claim 9, wherein the driver is an ethernet driver.

12. The system of claim 9, wherein the plurality of UDP packets, when received by the receiving device, cause the receiving device to reassemble the UDP payload using the plurality of UDP packets according to a sequence defined by the value of the IP ID field of each of the plurality of UDP packets.

13. The system of claim 9, wherein a first one of the values corresponding to the first packet instructs the receiving device to start aggregating packets and a second one of the values corresponding to the final packet instructs the receiving device to stop aggregating packets.

14. The system of claim 9, wherein a do not fragment (DF) bit set to 1 is encoded into each of the plurality of UDP packets.

15. The system of claim 9 wherein the value encoded into the TTL field is encoded into the two most significant bits of the TTL header field.

16. A method, comprising:

performing a User Datagram Protocol (UDP) segmentation offload (USO) operation on a UDP data payload to generate a plurality of segmented data packets, each segmented data packet of the plurality of segmented data packets comprising:

a do not fragment (DF) bit set to 1;

two most significant bits of a time-to-live (TTL) field set to one of an initial value, a final value, or an intermediate value; and

an Internet Protocol (IP) Identification (ID) field set to a value that sequentially increments from one segmented packet of the plurality of segmented packets to another; and

transmitting the plurality of segmented packets for processing by a receiving device.

17. The method of claim 16, wherein the initial value, the final value, and the intermediate value are each encoded into at least one of the plurality of segmented packets.

18. The method of claim 16, wherein the plurality of segmented packets, when received by the receiving device, cause the receiving device to reassemble the UDP data payload using the plurality of segmented packets according to a sequence defined by the value of the IP ID field of each of the plurality of segmented packets.

19. The method of claim 16, wherein the initial value instructs the receiving device to start aggregating packets and the final value instructs the receiving device to stop aggregating packets.

20. The method of claim 16, wherein the segmented packet of the plurality of segmented packets associated with the initial value comprises a minimum value of the IP ID field relative to each other segmented packet of the plurality of segmented packets.
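The two-bit TTL tagging recited in claims 6, 8, and 16 can be illustrated with a short sketch. This is a minimal illustration, not the claimed hardware implementation: the function name, the particular two-bit code assignments, and the default low-TTL hop value are all hypothetical choices for demonstration.

```python
# Hypothetical two-bit codes for initial/intermediate/final segments,
# drawn from the claimed set {(0,1), (1,0), (1,1)}.
INITIAL, INTERMEDIATE, FINAL = 0b01, 0b11, 0b10

def tag_segments(num_segments, base_ip_id=0, hop_ttl=0x05):
    """Return (ip_id, ttl, df) tuples for each segmented packet.

    The two most significant bits of the 8-bit TTL carry the code,
    the low six bits keep a non-zero hop value, the IP ID increments
    sequentially, and the DF bit is always set to 1.
    """
    tagged = []
    for i in range(num_segments):
        if i == 0:
            code = INITIAL
        elif i == num_segments - 1:
            code = FINAL
        else:
            code = INTERMEDIATE
        ttl = (code << 6) | (hop_ttl & 0x3F)
        tagged.append(((base_ip_id + i) & 0xFFFF, ttl, 1))
    return tagged

tags = tag_segments(4)
```

Per claim 20, the packet tagged with the initial value also carries the smallest IP ID of the series, which holds here because the IP ID increments from the first segment.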

Background

Packets (or datagrams) are typically processed by running each packet through the network stack of the operating system, which processes each header in the packet in layers, e.g., an ethernet header, an Internet Protocol (IP) header, and a Transmission Control Protocol (TCP)/User Datagram Protocol (UDP) header. High bandwidth network interfaces may result in high Central Processing Unit (CPU) utilization, because the CPU must process the headers of each of a large number of packets. Thus, various solutions may be implemented to offload certain processing from the CPU to one or more other hardware components, such as network Media Access Control (MAC) hardware. For example, segmentation offload (mainly for inherently ordered TCP connections) has been implemented so that the network stack running on the CPU only processes one large TCP payload, while the network MAC hardware is responsible for segmenting the larger TCP payload into maximum segment size (MSS) portions that fit within the maximum transmission unit (MTU) of the underlying ethernet medium, updating the TCP/IP headers according to the segmentation performed.
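As a rough sketch of the segmentation arithmetic described above, assuming a standard 1500-byte ethernet MTU and common IPv4/UDP header sizes of 20 and 8 bytes (typical values, not figures taken from this disclosure):

```python
import math

MTU = 1500        # assumed standard ethernet MTU, in bytes
IPV4_HDR = 20     # IPv4 header without options
UDP_HDR = 8       # fixed UDP header size
MSS = MTU - IPV4_HDR - UDP_HDR  # 1472 payload bytes fit per frame

def num_segments(payload_len):
    # Number of MTU-sized frames needed to carry the payload,
    # at MSS payload bytes per frame.
    return math.ceil(payload_len / MSS)

print(num_segments(65507))  # largest UDP payload over IPv4 -> 45 frames
```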

However, for UDP traffic, segmentation offload is more challenging because there is no built-in or inherent ordering in the UDP payload. For example, where an application sends a large UDP payload to the network stack, and the stack (executing on the CPU) adds a UDP header to the large payload and passes the payload to the network MAC hardware, the payload may not be accurately reconstructed. For example, the MAC hardware may fragment the UDP payload into MTU-sized frames, each carrying an IP/UDP header, and modify the appropriate fields according to the fragmentation. However, these individual MTU-sized frames may still be lost or routed out of order through the network during transmission, and thus may not arrive at the receiver in the order of transmission. As a result, and because the UDP/IP header information does not include ordering information, the original UDP payload sent by the application to the network stack may not be reassembled, e.g., because frames may be lost without the receiver knowing, and/or because the received frames may be out of order with the correct order unknown.

In this regard, some conventional systems require that each application using the network stack have full knowledge of the underlying offload capabilities and operations. For example, metadata may be encoded in the payload itself that helps an offload-aware application detect out-of-order or poorly reassembled payloads. However, these operations require rewriting the application, and depending on how the sender's particular hardware splits the payload, the application may still be unable to accurately process the data.

Disclosure of Invention

Embodiments of the present disclosure relate to efficient techniques for sequence-aware User Datagram Protocol (UDP) segmentation offload (USO). Systems and methods are disclosed that repurpose legacy header fields to render UDP Segmentation Offload (USO) sequence-aware, so that original payloads generated by applications and transmitted over a network can be reliably and accurately reassembled at a receiver.

In contrast to conventional systems as described above, legacy bit definitions in ethernet and/or Internet Protocol (IP) headers may be modified (and associated hardware may be configured to perform such modifications) to encode ordering information between frames of a larger UDP payload. The modifications may be encoded so that typical routing and forwarding of the individual frames of a larger UDP payload are not affected, thereby allowing the systems and methods described herein to be transparent to applications, so that applications can account for or utilize USO functionality without having to be rewritten or updated. As a result, CPU utilization for processing high bandwidth UDP traffic (e.g., in automotive systems, storage networks (e.g., Network Attached Storage (NAS)), etc.) may be reduced to improve overall system performance while maintaining the same system semantics.

Drawings

Systems and methods for an efficient technique for sequence-aware User Datagram Protocol (UDP) segmentation offload (USO) are described in detail below with reference to the accompanying drawings, in which:

fig. 1 depicts a block diagram of a sequence-aware UDP networking system in accordance with an embodiment of the present disclosure;

figs. 2A-2D depict visualizations of generating Maximum Transmission Unit (MTU) size packets in a sequence-aware UDP networking system according to embodiments of the present disclosure;

fig. 3 includes a flow diagram representing a method for sequence-aware UDP networking in accordance with an embodiment of the present disclosure; and

FIG. 4 is a block diagram of an example computing device suitable for use in implementing some embodiments of the present disclosure.

Detailed Description

Systems and methods related to efficient techniques for sequence-aware User Datagram Protocol (UDP) segmentation offload (USO) are disclosed. Although primarily described herein with respect to automotive and data center applications, this is for exemplary purposes only, and the systems and methods described herein may be implemented in any UDP-based network solution. Additionally, although ethernet is primarily described herein as the data link layer protocol, this is not intended to be limiting, and the systems and methods of the present invention may additionally or alternatively implement Serial Line Internet Protocol (SLIP), Point-to-Point Protocol (PPP), and/or other protocol types at the data link layer of a UDP network stack.

The systems and methods described herein may be implemented such that they are transparent to the application. For example, the application does not have to be aware that UDP ordering and USO are occurring. Because the network stack may be configured to process ordered frames and account for out-of-order and/or lost frames, the application may be oblivious to the ordering encoded in the received frames. Additionally, the systems and methods described herein may be backward compatible with existing network architectures, such that if a segmented and sequenced UDP packet is received but the receiving hardware is not configured for such segmentation, the receiving hardware may still process the packet as a normal UDP packet. As a result, transmission and reception of UDP packets may still occur seamlessly, both between senders and receivers that are configured for the ordering described herein and in mixed configurations, e.g., where the receiver is not configured but the sender is configured for USO. Additionally, although the present systems and methods are applicable to USO, implementing USO allows existing Transmission Control Protocol segmentation offload (TSO) hardware to be reused, with minor adjustments in embodiments, to perform USO as described herein.

Referring now to fig. 1, fig. 1 depicts an example sequence-aware UDP networking system 100 (alternatively referred to herein as "system 100") in accordance with some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, commands, groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components and in any suitable combination and location. Various functions described herein as being performed by an entity may be carried out by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in a memory. In some embodiments, the system 100 (e.g., the transmitter 102, the receiver 104, etc.) may include some or all of the components of the example computing device 400 of fig. 4, or may include additional or alternative components to those of the example computing device 400.

System 100 may include a transmitter 102 and a receiver 104 communicatively coupled via one or more networks 106 (e.g., without limitation, a Local Area Network (LAN), a Wide Area Network (WAN), a Low Power Wide Area Network (LPWAN), a Personal Area Network (PAN), a Storage Area Network (SAN), a System Area Network (SAN), a Metropolitan Area Network (MAN), a Campus Area Network (CAN), a Virtual Private Network (VPN), an Enterprise Private Network (EPN), and/or a passive fiber optic local area network (POLAN)). The connection of the transmitter 102 and the receiver 104 over the one or more networks 106 may include a wired connection, a wireless connection, or a combination thereof.

Transmitter 102 and receiver 104 may each comprise a physical device or component within system 100 and/or may each represent a separate system comprising any number of devices and/or components. For example, in some embodiments, each of the transmitter 102 and receiver 104 may comprise a component of a larger system, such as a component of an automobile that communicates over an ethernet connection, or a component of a data store (e.g., a network store, such as a Network Attached Storage (NAS)). In other embodiments, the transmitter 102 may correspond to a transmitting device of a first system and the receiver 104 may correspond to a receiving device of a second system. As such, the transmitter 102 and receiver 104 may be co-located and may communicate over a local (wired and/or wireless) connection and/or may be remotely located with respect to one another (e.g., a client node and a host node, such as in a distributed or cloud computing environment). The transmitter 102 and receiver 104 may thus include any two components, devices, and/or systems that may be capable of communicating using UDP. Although referred to as a transmitter and a receiver, this is not intended to be limiting and transmitter 102 may be a receiver when receiving data and receiver 104 may be a transmitter when transmitting data. However, for clarity, the transmitter 102 and receiver 104 are labeled as such.

The transmitter 102 and receiver 104 may include one or more processors 108 and 120, respectively, such as one or more Central Processing Units (CPUs) and/or Graphics Processing Units (GPUs). In some embodiments, to reduce the processing burden on the one or more processors 108 and/or 120, USO may be performed, for example, by utilizing ethernet Media Access Control (MAC) hardware 118, 130, ethernet hardware 116, 128, IP hardware 114, 126, and/or UDP hardware 112, 124. In some embodiments, one or more hardware components described herein may correspond to a Network Interface Card (NIC).

Applications 110, 122 of sender 102 and receiver 104, respectively, may include any application that generates application data for transmission over one or more networks 106. For example, the one or more applications 110, 122 may include, but are not limited to, a tunneling (e.g., VPN tunneling) application, a media streaming application, a game or game streaming application, a local broadcast mechanism application, an application within a NAS operating system and/or an application executing within an automotive system-e.g., a non-autonomous automotive system, a semi-autonomous automotive system (e.g., Advanced Driver Assistance System (ADAS)) and/or an autonomous automotive system-for communicating across an ethernet network, a Controller Area Network (CAN) over an ethernet network, and so forth. The application 110, 122 may generate and/or decode application data 202 (fig. 2A-2D) and/or application layer headers for the UDP packet.

The UDP hardware 112, 124 of the transmitter 102 and receiver 104 may correspond to hardware configured to generate and/or decode a UDP header 204 (fig. 2A-2D) of a UDP packet, respectively. The UDP header 204 may include data (e.g., a header field) representing a source port (e.g., 16 bits), a destination port (e.g., 16 bits), a length (e.g., 16 bits), and a checksum (e.g., 16 bits), for a total of 64 bits or 8 bytes. However, depending on the embodiment and the system configuration of the transmitter 102 and receiver 104, the UDP header 204 may include additional and/or alternative header fields and/or more or fewer bits for each of the respective header fields.
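The 8-byte UDP header layout described above can be packed in a few lines using Python's `struct` module. This is a minimal sketch with an illustrative function name; the checksum is left at zero for brevity.

```python
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    # Four 16-bit fields in network (big-endian) byte order:
    # source port, destination port, length, checksum. The length
    # field counts the 8-byte header plus the payload.
    return struct.pack("!4H", src_port, dst_port, 8 + payload_len, checksum)

hdr = build_udp_header(5000, 6000, 1472)
```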

The IP hardware 114, 126 of the sender 102 and receiver 104 may correspond to hardware configured to generate and/or decode an IP header 206 (fig. 2A-2D) and/or another type of network layer header (e.g., ARP, RARP, ICMP, IGMP, etc.), respectively, depending on the embodiment. The IP header 206 may include data (e.g., header fields) representing: a version of the IP protocol (e.g., IPv4, IPv6, etc.) (e.g., 4 bits), a header length (e.g., 4 bits), a priority and type of service (TOS) (e.g., 8 bits, the first 3 bits for priority and the last 5 bits for TOS), an identification (e.g., 16 bits), flags (e.g., 3 bits, including the do not fragment (DF) bit), a fragment offset (e.g., 13 bits), a time-to-live (TTL) (e.g., 8 bits), a protocol (e.g., TCP or UDP) (e.g., 8 bits), a header checksum (e.g., 16 bits), a source IP address (e.g., 32 bits), a destination IP address (e.g., 32 bits), and options (e.g., 32 bits). However, depending on the embodiment and the system configuration of the sender 102 and receiver 104, the IP header 206 may include additional and/or alternative header fields and/or more or fewer bits for each of the respective header fields.
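Similarly, the IPv4 header fields described above, including the IP ID, flags/DF, TTL, and TOS fields that the designs below repurpose, can be packed as follows. This is a minimal sketch: the function name and default addresses are illustrative, and the header checksum is left at zero for brevity.

```python
import socket
import struct

def build_ipv4_header(total_len, ip_id, ttl, tos=0, df=1, proto=17,
                      src="10.0.0.1", dst="10.0.0.2"):
    # 20-byte IPv4 header without options, network byte order.
    ver_ihl = (4 << 4) | 5           # version 4, IHL = 5 32-bit words
    flags_frag = (df & 0x1) << 14    # DF flag is bit 14 of flags+offset
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, tos, total_len, ip_id, flags_frag,
                       ttl, proto, 0,  # checksum 0 for brevity
                       socket.inet_aton(src), socket.inet_aton(dst))
```

Protocol number 17 is UDP, matching the UDP payloads discussed throughout.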

In some embodiments, as described in more detail herein with respect to fig. 2A-2D, the IP hardware 114 may be modified to encode the IP ID, TTL, DF, and/or TOS bits of the IP header 206 to indicate the start of the UDP sequence, the end of the UDP sequence, intermediate data packets of the UDP sequence, the ordering of received segmented packets, and/or other information regarding sequence-aware UDP transmissions.

The ethernet hardware 116, 128 of the transmitter 102 and receiver 104 may correspond to hardware configured to generate and/or decode an ethernet header 208 (fig. 2A-2D) and/or another type of data link layer header (e.g., PPP, HDLC, ADCCP, etc.), respectively, depending on the embodiment. The ethernet header 208 may include data (e.g., a header field) representing a preamble (e.g., 7 bytes), a start-of-frame delimiter (SFD) (e.g., 1 byte), a destination address (e.g., 6 bytes), a source address (e.g., 6 bytes), an ethertype (e.g., 2 bytes), a length (e.g., 2 bytes), data (e.g., the IP header 206, the application data 202 or a portion thereof, etc.) (e.g., 46-1500 bytes), and a Cyclic Redundancy Check (CRC) (e.g., 4 bytes). However, depending on the embodiment and the system configuration of the transmitter 102 and receiver 104, the ethernet header 208 may include additional and/or alternative header fields and/or more or fewer bits for each of the respective header fields.

In some embodiments, as described in more detail herein with respect to fig. 2C, ethernet hardware 116 may be modified to encode an ethertype portion of an ethernet header to indicate a different protocol type than the actual transmitted protocol type, and ethernet hardware 128 may be modified to decode the ethertype portion to process the received packet according to the actual ethertype of the packet.

The ethernet MAC hardware 118, 130 of the transmitter 102 and receiver, respectively, may correspond to hardware configured to interact with wired, wireless, and/or optical transmission media to transmit data packets of each Maximum Transmission Unit (MTU) size over the one or more networks 106. Thus, once the network stack (including the application layer, transport layer, network layer, and data link layer) sets the headers of MTU-sized packets corresponding to application data 202, ethernet MAC hardware 118 may control the transmission of MTU-sized packets through one or more networks 106. Similarly, the ethernet MAC hardware 130 may receive MTU-sized packets and determine to pass the MTU-sized packets up to the network stack of the receiver 104 for ingestion by the application 122.

System 100 may be configured to insert or encode sequence-aware information using legacy fields (e.g., of IP headers, ethernet headers, etc.) of UDP packets. As a result, each MTU-sized frame of the entire application payload may be encoded so that the receiver can identify which packets need to be reassembled, which packet indicates the end of the reassembly, and how to handle out-of-order or lost MTU-sized frames. For example, when large UDP payloads are segmented via USO, the receiver 104 may still determine when to begin reassembling frames to reconstruct the larger payload segmented by the transmitter 102, even though the ordering and/or sequence of arrival at the receiver 104 is not controlled. Once reassembly has begun, the receiver may also determine where the larger payload ends based on the information decoded from the frames (e.g., which packet or frame is the last packet or frame of the payload). In addition, because the payload may be broken into any number of segmented frames (e.g., MTU-sized frames), out-of-order or lost frames may be identified or reordered according to the decoded information, which may occur before the payload is passed to the application, such that the application need not know whether the packets were lost and/or received out of order.

Referring now to fig. 2A, fig. 2A includes a visualization 200A of a process for encoding sequence awareness information into MTU-sized frames 210A-210N of a UDP payload. Although only three MTU-sized frames 210 are shown, this is not intended to be limiting and any number (N) of MTU-sized frames may be generated depending on the size of the payload and the size of the MTU. For example, application data 202 (e.g., user data, application headers, etc.) may be generated by application 110 of transmitter 102 and/or an application layer of a network stack, UDP hardware 112 corresponding to a transport layer of the network stack may append UDP headers 204 to application data 202, IP hardware 114 corresponding to a network layer of the network stack may append IP headers 206 to UDP headers 204, and ethernet hardware 116 corresponding to a data link layer may append ethernet headers 208 to IP headers 206. The payload may then be fragmented using USO to generate resulting MTU-sized frames 210A-210N, and each MTU-sized frame 210A-210N may be transmitted (e.g., using ethernet MAC hardware 118) over one or more networks 106 to receiver 104.

In order for the receiver 104 to understand that MTU-sized frames 210A-210N correspond to a larger payload of application data 202 that has been fragmented and ordered, the IP header 206 may be encoded by utilizing the DF bit, TTL bits, and IP ID bits. For example, the DF bit may be set (e.g., to "1" instead of "0") to indicate that the MTU-sized frames should not be fragmented further. When the receiver 104 is configured to reassemble payloads that have been segmented using USO, the receiver 104 may recognize, based on the DF bit being set, that the received segments (e.g., MTU-sized frames) are ordered. In addition to the DF bit being set, the IP ID field of the IP header 206 may be set to a constant value for each section of the larger payload. For example, the IP hardware 114 may be configured to set the IP ID to the same value for each segmented packet of a larger payload, as opposed to the conventional behavior of incrementing the IP ID field for each segment or new payload. As such, a set DF bit may indicate to the receiver 104 that a segment belongs to a larger overall payload, and an IP ID that is consistent across segments may indicate to the receiver 104 that the segments are part of the same larger payload. The receiver 104 may then continue to aggregate fragmented packets until a section or payload is received that has no DF bit set, does not have the same IP ID (e.g., within the same 5-tuple flow), and/or has a different MTU size than the sections corresponding to the application data 202 (e.g., any one or all of these events may indicate to the receiver 104 that the end of the reassembly has been reached). In some embodiments, the receiver 104 may have a modified Generic Receive Offload (GRO) stack so that the receiver 104 continues to aggregate packets.

To ensure proper ordering and/or sequencing of received fragmented packets identified as corresponding to a larger payload (e.g., using the DF bit, IP ID, and/or MTU size), the TTL and/or TOS fields of the IP header 206 may be utilized. For example, because the TTL field may need to be non-zero (e.g., "1" for P2P connections) for the receiver 104 to consider the packet valid, the TTL field may be sequentially incremented (e.g., by one, by two, by three, etc.) using values between 2 and 255 (e.g., since the TTL is an 8-bit field) as the segment number of each segmented packet of the larger payload. By way of non-limiting example, the first packet may include a "2" encoded in the TTL field, the next packet may include a "3," and so on until the final segmented packet of the larger payload is generated. In some embodiments, the TOS field may be used to sequence and/or order the received segmented packets. For example, for unused bits of the TOS field (e.g., bits that do not overlap with the priority definition of the packet), the values of these bits may be incremented to correspond to the sequence or ordering of the segmented packets of the larger payload.
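The sender-side tagging of the fig. 2A design just described (DF bit set, one constant IP ID per payload, TTL carried as a segment number starting at 2) might be sketched as follows; the function name and dictionary representation are illustrative, not the disclosed hardware implementation.

```python
def encode_fig2a_tags(num_frames, ip_id):
    # DF is set on every frame; every frame of this payload shares
    # one IP ID; the 8-bit TTL is repurposed as a 2..255 sequence
    # number, kept non-zero so receivers treat the packets as valid.
    if num_frames > 254:
        raise ValueError("TTL-based sequencing supports at most 254 frames")
    return [{"df": 1, "ip_id": ip_id, "ttl": 2 + i}
            for i in range(num_frames)]

frames = encode_fig2a_tags(3, ip_id=42)
```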

In addition, IP hardware 114 may update the total length, IP checksum, and/or other header fields of IP header 206 for each segmented packet, and UDP hardware 112 may update the UDP length, checksum, and/or other header fields of UDP header 204 for each segmented packet.

Using the design of fig. 2A, a maximum USO payload length of 372KB (e.g., 1460 × 255) may be supported. Additionally, such a design may allow the receiver 104 to discard an incomplete series of USO segmented packets; e.g., where a sequence number such as 8 is received but sequence number 7 is never received, the receiver 104 may know that the series is incomplete and may discard the packets. Because the design of fig. 2A is backward compatible, received packets that do not have the same IP ID, do not have the DF bit set, etc., may still be processed as normal UDP packets by the network stack of the receiver 104. Furthermore, if both the transmitter 102 and the receiver 104 are configured to operate according to the design of fig. 2A, the computational load on the one or more processors 108, 120 (e.g., CPUs) may be reduced, resulting in better UDP performance. However, because the DF bit is here set for communication between two ethernet endpoints, unlike the conventional use of the DF bit by switches and/or intermediate network nodes for MTU discovery, a receiver configured to interpret a set DF bit as indicating a segmented packet may retain a received packet for potential reassembly even when that packet is not actually part of a segmented payload (e.g., where the sender of the packet is unaware of this use of the DF bit). As a result, such a packet may be delayed by the receiver.
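The incomplete-series check described above (e.g., sequence number 8 arrives but 7 never does) might look like the following sketch, assuming the fig. 2A encoding where the TTL is a sequence number starting at 2; the function name is hypothetical.

```python
def series_is_complete(received_ttls):
    # With the TTL repurposed as a 2..N sequence number, any gap in
    # the received sequence marks the series incomplete, so the
    # receiver can discard the whole set of segmented packets.
    if not received_ttls:
        return False
    seq = sorted(received_ttls)
    return seq[0] == 2 and seq == list(range(2, 2 + len(seq)))
```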

Referring now to fig. 2B, fig. 2B includes a visualization 200B of a process for encoding sequence awareness information into MTU-sized frames 212A-212N of a UDP payload. Although only three MTU-sized frames 212 are shown, this is not intended to be limiting and any number (N) of MTU-sized frames may be generated depending on the size of the payload and the size of the MTU. For example, application data 202 (e.g., user data, application headers, etc.) may be generated by application 110 of transmitter 102 and/or an application layer of a network stack, UDP hardware 112 corresponding to a transport layer of the network stack may append UDP headers 204 to application data 202, IP hardware 114 corresponding to a network layer of the network stack may append IP headers 206 to UDP headers 204, and ethernet hardware 116 corresponding to a data link layer may append ethernet headers 208 to IP headers 206. The payload may then be fragmented using the USO to generate resulting MTU-sized frames 212A-212N, and the individual MTU-sized frames 212A-212N may be transmitted (e.g., using ethernet MAC hardware 118) over one or more networks 106 to the receiver 104.

In order for the receiver 104 to understand that the MTU-sized frames 212A-212N correspond to a larger payload of application data 202 that has been segmented and sequenced, the IP header 206 may be encoded by utilizing IP ID bits, TTL bits, and/or TOS bits. For example, the IP ID field of IP header 206 may be incremented for each segment or new payload, and the sequence or ordering of the packets may be determined from the IP ID increments. In this way, since each segment is carried in an IP packet with a unique IP ID, the GRO stack of the receiver 104 can be signaled to start and stop aggregating segmented packets (e.g., belonging to a 5-tuple flow) for reassembly according to the encoded value in the TTL field and/or the TOS field. For example, the TTL field may be set by the IP hardware 114 to an initial value (e.g., "A") that indicates to the receiver 104 that the received packet is the initial segmented packet of a larger payload. The IP hardware 114 may increment the TTL field of each successive intermediate packet from the initial value (e.g., "A+1," "A+2," etc.) until the final segmented packet, whose TTL field may be set to a final value (e.g., "B") indicating to the receiver 104 that the packet is the final segmented packet of the larger payload.
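A sketch of the TTL assignment just described, assuming an illustrative initial value A of 2 and a distinguished final value B of 255 (both choices are assumptions, not specified by the disclosure):

```python
def assign_ttls(num_segments: int, initial: int = 2, final: int = 255) -> list[int]:
    """Per the scheme above: the initial segment carries the initial TTL value,
    each intermediate segment increments from it, and the final segment carries
    the distinguished final value so the receiver knows to stop aggregating."""
    if num_segments < 2:
        # Degenerate case: a lone segment is also the final one.
        return [final] * num_segments
    return [initial + i for i in range(num_segments - 1)] + [final]

assign_ttls(4)  # [2, 3, 4, 255]
```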

As a non-limiting example, the network stack of sender 102 may insert a TTL value of "2" for payload X, which may be sent as IP IDs 1-6 with TTL "2". The receiver 104 may aggregate all packets having the same TTL value across IP IDs belonging to the same 5-tuple flow. The network stack at sender 102 may then insert a TTL value of "3" for payload Y, which may be sent as IP IDs 7-12 with TTL "3," and so on.
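The receiver-side aggregation in the payload X / payload Y example can be sketched as follows. This is a simplified model only: real GRO state lives in the driver, and the dict-based buffer and function names here are purely illustrative.

```python
from collections import defaultdict

# (flow 5-tuple, ttl) -> {ip_id: segment bytes}
buffers: dict = defaultdict(dict)

def on_segment(flow: tuple, ttl: int, ip_id: int, data: bytes) -> None:
    """Aggregate segments that share the same 5-tuple flow and TTL value."""
    buffers[(flow, ttl)][ip_id] = data

def reassemble(flow: tuple, ttl: int) -> bytes:
    """Join the aggregated segments for one payload in IP ID order."""
    segments = buffers.pop((flow, ttl))
    return b"".join(segments[i] for i in sorted(segments))
```

With payload X arriving as IP IDs 1-6 under TTL 2 and payload Y as IP IDs 7-12 under TTL 3, two independent buffers accumulate, and each payload reassembles in IP ID order even if its segments arrive out of order.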

To ensure proper ordering and/or sequencing of received segmented packets identified as corresponding to larger payloads, the TTL, TOS, and/or IP ID fields of the IP header 206 may be utilized. For example, since the TTL field may need to be non-zero (e.g., "1" for P2P connections) for the receiver 104 to treat the packet as valid, the TTL field may be sequentially incremented from the initial value, using some subset of values from 2 to 255, as the segment number of a segmented packet of the larger payload. Similarly, since the IP ID field is incremented for each packet, the IP ID field can be utilized to determine the ordering or sequence for reassembly of the payload. In some embodiments, the TOS field may be used similarly to and/or in addition to the TTL field for sequencing and/or ordering. For example, for unused bits of the TOS field (e.g., bits that do not overlap with the priority definition of the packet), the values of these bits may be incremented to correspond to the sequence or ordering of the segmented packets of the larger payload.

In some embodiments, to support out-of-order or lost segment detection, reassembly signaling and sequencing signaling may be separated into different header fields, e.g., using two or more of an IP ID, TTL and/or TOS. For example, if the TTL field is used to convey signaling information for starting and stopping reassembly, the TOS field may be used to convey sequencing, and vice versa. Similarly, the IP ID field may convey sequencing while the TTL and/or TOS fields convey signaling.

In addition, IP hardware 114 may update the total length, IP checksum, and/or other header fields of IP header 206 for each segmented packet, and UDP hardware 112 may update the UDP length, checksum, and/or other header fields of UDP header 204 for each segmented packet.

Referring now to fig. 2C, fig. 2C includes a visualization 200C of a process for encoding sequence awareness information into MTU-sized frames 214A-214N of a UDP payload. Although only three MTU-sized frames 214 are shown, this is not intended to be limiting and any number (N) of MTU-sized frames may be generated depending on the size of the payload and the size of the MTU. For example, application data 202 (e.g., user data, application headers, etc.) may be generated by application 110 of transmitter 102 and/or an application layer of a network stack, UDP hardware 112 corresponding to a transport layer of the network stack may append UDP headers 204 to application data 202, IP hardware 114 corresponding to a network layer of the network stack may append IP headers 206 to UDP headers 204, and ethernet hardware 116 corresponding to a data link layer may append ethernet headers 208 to IP headers 206. The payload may then be fragmented using USO to generate resulting MTU-sized frames 214A-214N, and the individual MTU-sized frames 214A-214N may be transmitted (e.g., using ethernet MAC hardware 118) over one or more networks 106 to receiver 104.

In order for the receiver 104 to understand that the MTU-sized frames 214A-214N correspond to a larger payload of application data 202 that has been segmented and sequenced, the IP header 206 may be encoded by utilizing IP ID bits, TTL bits, and/or TOS bits, and/or the ethernet header 208 may be encoded by utilizing ethertype bits. For example, the ethernet hardware 116 may encode a custom value corresponding to a custom ethertype into the ethertype field of the ethernet header 208. Receiver 104 may be configured to identify and process (e.g., via ethernet MAC hardware 130, such as an ethernet driver) this custom ethertype field as indicating that the received packet corresponds to a larger segmented payload, and receiver 104 may convert the ethertype to a value corresponding to an IP ethertype before passing the packet up the network stack to application 122.
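A sketch of the receiver-side ethertype handling described above. The custom value 0x88B5 (an IEEE 802 ethertype reserved for local experimental use) is an assumed stand-in for whatever proprietary value an implementation would choose:

```python
ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_CUSTOM = 0x88B5  # hypothetical stand-in for the proprietary value

def classify_and_patch(frame: bytes) -> tuple[bool, bytes]:
    """Read the EtherType at byte offsets 12-13 of an Ethernet frame. If it is
    the custom value, flag the frame as a segment of a larger payload and
    rewrite the field to the IPv4 EtherType before passing it up the stack."""
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == ETHERTYPE_CUSTOM:
        patched = frame[:12] + ETHERTYPE_IPV4.to_bytes(2, "big") + frame[14:]
        return True, patched
    return False, frame  # ordinary frame, passed through unmodified
```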

To ensure proper ordering and/or sequencing of received segmented packets identified as corresponding to larger payloads, the TTL, TOS, and/or IP ID fields of the IP header 206 may be utilized. For example, since the TTL field may need to be non-zero (e.g., "1" for P2P connections) for the receiver 104 to treat the packet as valid, the TTL field may be sequentially incremented from the initial value, using some subset of values from 2 to 255, as the segment number of a segmented packet of the larger payload. Similarly, since the IP ID field is incremented for each packet, the IP ID field can be utilized to determine the ordering or sequence of the reassembled payloads. In some embodiments, the TOS field may be used similarly to and/or in addition to the TTL field for sequencing and/or ordering. For example, for unused bits of the TOS field (e.g., bits that do not overlap with the priority definition of the packet), the values of these bits may be incremented to correspond to the sequence or ordering of the segmented packets of the larger payload. In some embodiments, to indicate to the receiver 104 that aggregation of packets having the custom ethertype should be started or stopped, the TTL field and/or the TOS field may be encoded with a custom value. For example, to indicate the initial packet for reassembly, the TTL field and/or the TOS field may be set to (0, 1), and to indicate the final packet for reassembly, the TTL field and/or the TOS field may be set to (1, 0). As such, the start and/or stop of reassembly may be indicated using one or more header fields (e.g., TTL field, TOS field, etc.) of the IP header 206, and sequencing and/or ordering may be performed using one or more header fields (e.g., IP ID field, TTL field, TOS field, etc.) of the IP header 206.

Once the receiver 104 has identified the custom ethertype and has determined the ordering through the IP header 206, the ethertype value for the reassembled payload may be set to correspond to the IP ethertype (e.g., IPv4, IPv6, etc.) and the IP header fields (e.g., IP ID, TTL and/or TOS) may be set to correspond to the original payload. In this way, the reassembled payload may then be passed up to the network stack of the receiver 104 for processing as a normal UDP packet.

In addition, IP hardware 114 may update the total length, IP checksum, and/or other header fields of IP header 206 for each segmented packet, and UDP hardware 112 may update the UDP length, checksum, and/or other header fields of UDP header 204 for each segmented packet.

Since the design of fig. 2C is backward compatible, received packets that do not carry the custom ethertype can be processed normally. For example, because a device unaware of the design of fig. 2C will not use the proprietary ethertype field, its data packets may be processed unmodified, conforming to the original network stack implementation. Additionally, because the ethernet driver (e.g., ethernet MAC hardware 130) may handle identifying the custom ethertype, determining the ordering, reassembling the packets, and then passing the reassembled payload up the network stack, the CPU utilization of the receiver 104 may be reduced. Similarly, since the ethernet driver of transmitter 102 can generate segmented packets with the custom ethertype, the CPU usage of transmitter 102 may also be reduced.

Referring now to fig. 2D, fig. 2D includes a visualization 200D of a process for encoding sequence awareness information into MTU-sized frames 216A-216N of a UDP payload. Although only three MTU-sized frames 216 are shown, this is not intended to be limiting and any number (N) of MTU-sized frames may be generated depending on the size of the payload and the size of the MTU. For example, application data 202 (e.g., user data, application headers, etc.) may be generated by application 110 of transmitter 102 and/or an application layer of a network stack, UDP hardware 112 corresponding to a transport layer of the network stack may append UDP headers 204 to application data 202, IP hardware 114 corresponding to a network layer of the network stack may append IP headers 206 to UDP headers 204, and ethernet hardware 116 corresponding to a data link layer may append ethernet headers 208 to IP headers 206. The payload may then be fragmented using the USO to generate resulting MTU-sized frames 216A-216N, and the respective MTU-sized frames 216A-216N may be transmitted (e.g., using ethernet MAC hardware 118) over the one or more networks 106 to the receiver 104.

In order for the receiver 104 to understand that the MTU-sized frames 216A-216N correspond to a larger payload of application data 202 that has been segmented and sequenced, the IP header 206 may be encoded by utilizing IP ID bits, DF bits, TTL bits, and/or TOS bits. For example, the DF bit may be set (e.g., to "1" instead of "0") to indicate that MTU-sized frames should not be fragmented. When receiver 104 is configured to reassemble payloads that have been fragmented using USO, receiver 104 may recognize, based on the DF bit being set, that the received segments (e.g., MTU-sized frames) are sequenced. In addition, the IP ID field of the IP header 206 may be incremented for each segment or new payload, and sequence information from the IP ID increments (and/or TOS field increments) may be utilized to determine the sequence or ordering of the packets. In this way, since each segment is carried in an IP packet with a unique IP ID, the GRO stack of the receiver 104 can be signaled to start and stop aggregating segmented packets (e.g., belonging to a 5-tuple flow) for reassembly based on the encoded value in the TTL field. For example, since the TTL field is an 8-bit value whose two high-order bits (e.g., the 1st and 2nd bits in the bit sequence) are typically unused, because the TTL value typically does not exceed 64, the high-order bits can be utilized to signal the start and stop of reassembly to the receiver 104. Thus, the high-order bits may be set by the IP hardware 114 to an initial value (e.g., "(0, 1)") that indicates to the receiver 104 that the received packet is the initial segmented packet of a larger payload. The IP hardware 114 may set the TTL field of each successive intermediate packet to an intermediate value (e.g., "(1, 1)"), and the IP hardware 114 may set the TTL field of the final packet to a final value (e.g., "(1, 0)") to indicate to the receiver 104 that the packet is the final segmented packet of the larger payload.
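A sketch of the high-bit TTL encoding for fig. 2D, using the marker values quoted above and assuming the conventional hop count stays within the low 6 bits:

```python
INITIAL, INTERMEDIATE, FINAL = 0b01, 0b11, 0b10  # two high-order TTL bits

def pack_ttl(hop_count: int, marker: int) -> int:
    """Combine a reassembly marker (high 2 bits) with a hop count (low 6 bits)
    into a single 8-bit TTL value."""
    assert 0 < hop_count < 64, "hop count must be non-zero and fit in 6 bits"
    return (marker << 6) | hop_count

def unpack_ttl(ttl: int) -> tuple[int, int]:
    """Return (marker, hop_count) from an 8-bit TTL value."""
    return ttl >> 6, ttl & 0x3F
```

For example, pack_ttl(5, INITIAL) yields 0b01000101 (69): a receiver that recovers marker 0b01 knows to start aggregating, while the low 6 bits still carry a usable hop count of 5.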

In some embodiments, to support out-of-order or lost segment detection, reassembly signaling and sequencing signaling may be separated into different header fields, e.g., using two or more of an IP ID, TTL and/or TOS. For example, if the TTL field is used to convey signaling information for starting and stopping reassembly, the TOS field may be used to convey sequencing, and vice versa. Similarly, the IP ID field may convey sequencing while the TTL and/or TOS fields convey signaling.

In addition, IP hardware 114 may update the total length, IP checksum, and/or other header fields of IP header 206 for each segmented packet, and UDP hardware 112 may update the UDP length, checksum, and/or other header fields of UDP header 204 for each segmented packet.

Similar to the designs of figs. 2A-2C, the design of fig. 2D may allow the receiver 104 to discard an incomplete series of USO segmented packets. For example, if sequence number 8 is received but sequence number 7 never arrives, the receiver 104 may determine that the series is incomplete and may discard the packets. Furthermore, because the design of fig. 2D is backward compatible, received packets without the DF bit set, without the high-order bits of the TTL set, etc., may still be processed through the network stack of the receiver 104 as normal UDP packets. Furthermore, if both the transmitter 102 and the receiver 104 are configured to operate according to the design of fig. 2D, the computational load on one or more processors 108, 120 (e.g., CPUs) may be reduced, resulting in better UDP performance. Further, because only the high-order bits of the TTL field are repurposed, the system can still encode TTL-related information in the TTL field (e.g., within the low 6 bits) while the high-order bits carry the start and stop indicators for reassembly.
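The incomplete-series check discussed above amounts to verifying that no sequence number in the span from the initial to the final segment is missing; a minimal sketch (the function name is illustrative):

```python
def series_complete(received: set[int], first: int, last: int) -> bool:
    """True only if every sequence number from the initial segment to the
    final segment has been received."""
    return set(range(first, last + 1)) <= received

series_complete({1, 2, 3, 4, 5, 6, 8}, 1, 8)  # False: 7 never arrived, so discard
series_complete({1, 2, 3, 4, 5, 6, 7, 8}, 1, 8)  # True: safe to reassemble
```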

As a result, receiver 104 may reassemble application data 202 for use by application 122 when system 100 is executing according to any of the designs of fig. 2A-2D. In this manner, data from application 110 may be sent to application 122 using UDP in a still connectionless manner, but is also sequence-aware, such that receiver 104 may reassemble segmented packets in sequence even if the segmented packets arrive at receiver 104 out of order. In this way, the system 100 may benefit from the latency and bandwidth reduction associated with UDP communications while also gaining the sequencing benefits associated more generally with TCP communications.

Referring now to fig. 3, each block of the method 300 described herein includes a computational process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in a memory. The method 300 may also be embodied as computer useable instructions stored on a computer storage medium. The method 300 may be provided by a stand-alone application, a service or hosted service (either alone or in combination with another hosted service), or a plug-in to another product, to name a few. Further, by way of example, the method 300 is described with respect to the system of fig. 1 and the visualization 200D of fig. 2D. However, the method 300 may additionally or alternatively be performed by any one or any combination of systems, including but not limited to those described herein.

Fig. 3 includes a flow diagram representing a method 300 for sequence-aware UDP networking in accordance with an embodiment of the present disclosure. At block B302, the method 300 includes generating a UDP payload, the UDP payload including application data from an application. For example, a payload may be generated that includes application data 202, an application header, and a UDP header 204.

At block B304, the method 300 includes generating a plurality of UDP packets, each UDP packet including a portion of the application data. For example, the transmitter (e.g., using IP hardware 114) may perform a USO operation to segment the UDP payload into a plurality of MTU-sized frames 216. Each MTU-sized frame 216 may include a portion of application data 202 such that when multiple MTU-sized frames 216 are reassembled, application data 202 may be reassembled.

At block B306, the method 300 includes encoding an initial value into a first IP header field of an initial packet, encoding a final value into a first IP header field of a final packet, and encoding intermediate values into the first IP header field of each intermediate packet. For example, the IP hardware 114 may encode an initial value (e.g., "(0, 1)") into the TTL field of a first MTU-sized frame 216A, a final value (e.g., "(1, 0)") into the TTL field of a last MTU-sized frame 216N, and an intermediate value (e.g., "(1, 1)") into the TTL field of each of the other MTU-sized frames 216A+1 through 216N-1. As such, these values may indicate to the receiver 104 when to start and stop aggregating packets and which packets correspond to the total payload.

At block B308, the method 300 includes incrementing the value of the second IP header field for each of the plurality of UDP packets. For example, the IP hardware 114 may increment the IP ID field, which begins with a first MTU-sized frame 216A and ends with a last MTU-sized frame 216N.

At block B310, the method 300 includes transmitting a plurality of UDP packets to a receiving device. For example, the transmitter 102 (e.g., using an ethernet driver, such as the ethernet MAC hardware 118) may transmit each of the plurality of MTU-sized frames 216 over the one or more networks 106.
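Blocks B302-B310 can be sketched end to end as follows. The dict "packet" fields are illustrative placeholders for the actual header descriptors, at least two segments are assumed, and transmission (B310) is left to the driver:

```python
def segment_payload(app_data: bytes, mtu_payload: int = 1460,
                    base_ip_id: int = 0) -> list[dict]:
    """Split a UDP payload into MTU-sized portions (B304), encode the
    initial / intermediate / final marker for each portion (B306), and
    assign incrementing IP IDs (B308)."""
    chunks = [app_data[i:i + mtu_payload]
              for i in range(0, len(app_data), mtu_payload)]
    last = len(chunks) - 1
    packets = []
    for seq, chunk in enumerate(chunks):
        if seq == 0:
            marker = 0b01   # initial value
        elif seq == last:
            marker = 0b10   # final value
        else:
            marker = 0b11   # intermediate value
        packets.append({"ip_id": base_ip_id + seq,
                        "ttl_marker": marker,
                        "data": chunk})
    return packets
```

A 3000-byte payload, for instance, yields three packets (1460 + 1460 + 80 bytes) whose markers run initial, intermediate, final and whose IP IDs increment from the base value.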

Example computing device

Fig. 4 is a block diagram of one or more example computing devices 400 suitable for use in implementing some embodiments of the present disclosure. Computing device 400 may include an interconnect system 402, the interconnect system 402 directly or indirectly coupling the following devices: memory 404, one or more Central Processing Units (CPUs) 406, one or more Graphics Processing Units (GPUs) 408, a communication interface 410, input/output (I/O) ports 412, input/output components 414, a power supply 416, one or more presentation components 418 (e.g., one or more displays), and one or more logic units 420.

Although the various blocks of fig. 4 are shown connected with wires via the interconnect system 402, this is not intended to be limiting and is for clarity only. For example, in some embodiments, a presentation component 418 (such as a display device) may be considered an I/O component 414 (e.g., if the display is a touch screen). As another example, the CPU 406 and/or the GPU 408 may include memory (e.g., the memory 404 may also represent a storage device other than the memory of the GPU 408, the CPU 406, and/or other components). In other words, the computing device of fig. 4 is merely illustrative. No distinction is made between categories such as "workstation," "server," "laptop," "desktop," "tablet," "client device," "mobile device," "handheld device," "game console," "Electronic Control Unit (ECU)," "virtual reality system," and/or other device or system types, as all are considered within the scope of the computing device of fig. 4.

Interconnect system 402 may represent one or more links or buses, such as an address bus, a data bus, a control bus, or a combination thereof. The interconnect system 402 may include one or more bus or link types, such as an Industry Standard Architecture (ISA) bus, an Extended Industry Standard Architecture (EISA) bus, a Video Electronics Standards Association (VESA) bus, a Peripheral Component Interconnect (PCI) bus, a peripheral component interconnect express (PCIe) bus, and/or another type of bus or link. In some embodiments, there is a direct connection between the components. By way of example, the CPU 406 may be directly connected to the memory 404. Further, the CPU 406 may be directly connected to the GPU 408. Where there is a direct or point-to-point connection between components, interconnect system 402 may include a PCIe link to perform the connection. In these examples, the PCI bus need not be included in computing device 400.

Memory 404 can include any of a variety of computer-readable media. Computer readable media can be any available media that can be accessed by computing device 400. Computer readable media may include both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media.

Computer storage media may include volatile and nonvolatile, and/or removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, and/or other data types. For example, memory 404 may store computer readable instructions (e.g., representing one or more programs and/or one or more program elements such as an operating system). Computer storage media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 400. As used herein, a computer storage medium does not include a signal per se.

Communication media may embody computer readable instructions, data structures, program modules, and/or other data types in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.

The one or more CPUs 406 may be configured to execute at least some of the computer readable instructions to control one or more components of the computing device 400 to perform one or more of the methods and/or processes described herein. The one or more CPUs 406 can each include one or more cores (e.g., one, two, four, eight, twenty-eight, seventy-two, etc.) capable of processing multiple software threads simultaneously. The one or more CPUs 406 can include any type of processor, and can include different types of processors (e.g., processors with fewer cores for mobile devices and processors with more cores for servers) depending on the type of computing device 400 implemented. For example, depending on the type of computing device 400, the processor may be an Advanced RISC Machine (ARM) processor implemented using Reduced Instruction Set Computing (RISC) or an x86 processor implemented using Complex Instruction Set Computing (CISC). Computing device 400 may include one or more CPUs 406 in addition to one or more microprocessors or supplementary coprocessors, such as math coprocessors.

In addition to or alternatively to the one or more CPUs 406, the one or more GPUs 408 may be configured to execute at least some of the computer readable instructions to control one or more components of the computing device 400 to perform one or more of the methods and/or processes described herein. The one or more GPUs 408 may be integrated GPUs (e.g., with one or more of the CPUs 406), and/or one or more of the GPUs 408 may be discrete GPUs. In embodiments, the one or more GPUs 408 may be coprocessors of the one or more CPUs 406. The computing device 400 may use the one or more GPUs 408 to render graphics (e.g., 3D graphics) or perform general-purpose computations. For example, the one or more GPUs 408 may be used for General-Purpose computing on GPUs (GPGPU). The one or more GPUs 408 may include hundreds or thousands of cores capable of processing hundreds or thousands of software threads simultaneously. The one or more GPUs 408 may generate pixel data for output images in response to rendering commands (e.g., received from the one or more CPUs 406 via a host interface). The one or more GPUs 408 may include graphics memory, such as display memory, for storing pixel data or any other suitable data, such as GPGPU data. Display memory may be included as part of memory 404. The one or more GPUs 408 may include two or more GPUs operating in parallel (e.g., via a link). The link may connect the GPUs directly (e.g., using NVLINK) or may connect the GPUs through a switch (e.g., using NVSwitch). When combined together, each GPU 408 may generate pixel data or GPGPU data for a different portion of an output or for different outputs (e.g., a first GPU for a first image and a second GPU for a second image). Each GPU may include its own memory, or may share memory with other GPUs.

In addition to or in lieu of the one or more CPUs 406 and/or one or more GPUs 408, the one or more logic units 420 may be configured to execute at least some computer-readable instructions to control one or more components of the computing device 400 to perform one or more of the methods and/or processes described herein. In embodiments, the one or more CPUs 406, one or more GPUs 408, and/or one or more logic units 420 may perform any combination of methods, processes, and/or portions thereof, either separately or in combination. One or more of the logic units 420 may be part of and/or integrated within one or more CPUs 406 and/or one or more GPUs 408, and/or one or more logic units 420 may be discrete components or otherwise disposed external to one or more CPUs 406 and/or one or more GPUs 408. In an embodiment, the one or more logic units 420 may be coprocessors of the one or more CPUs 406 and/or the one or more GPUs 408.

Examples of the one or more logic units 420 include one or more processing cores and/or components thereof, such as Tensor Cores (TC), Tensor Processing Units (TPU), Pixel Vision Cores (PVC), Vision Processing Units (VPU), Graphics Processing Clusters (GPC), Texture Processing Clusters (TPC), Streaming Multiprocessors (SM), Tree Traversal Units (TTU), Artificial Intelligence Accelerators (AIA), Deep Learning Accelerators (DLA), Arithmetic Logic Units (ALU), Application Specific Integrated Circuits (ASIC), Floating Point Units (FPU), input/output (I/O) elements, Peripheral Component Interconnect (PCI) or peripheral component interconnect express (PCIe) elements, and the like.

Communication interface 410 may include one or more receivers, transmitters, and/or transceivers that enable computing device 400 to communicate with other computing devices via an electronic communication network, including wired and/or wireless communication. Communication interface 410 may include components and functionality to enable communication over any of a number of different networks, such as a wireless network (e.g., Wi-Fi, Z-Wave, bluetooth LE, ZigBee, etc.), a wired network (e.g., communicating over ethernet or wireless broadband technology), a low-power wide area network (e.g., LoRaWAN, SigFox, etc.), and/or the internet.

The I/O ports 412 can enable the computing device 400 to be logically coupled to other devices including I/O components 414, one or more presentation components 418, and/or other components, some of which can be built into (e.g., integrated into) the computing device 400. Exemplary I/O components 414 include a microphone, mouse, keyboard, joystick, game pad, game controller, satellite dish, scanner, printer, wireless device, and the like. The I/O component 414 may provide a Natural User Interface (NUI) that handles air gestures, voice, or other physiological inputs generated by a user. In some cases, the input may be passed to an appropriate network element for further processing. The NUI may implement any combination of voice recognition, stylus recognition, facial recognition, biometric recognition, on-screen and adjacent-screen gesture recognition, air gestures, head and eye tracking, and touch recognition associated with a display of computing device 400 (as described in more detail below). Computing device 400 may include a depth camera for gesture detection and recognition, such as a stereo camera system, an infrared camera system, an RGB camera system, touch screen technology, and combinations thereof. Additionally, the computing device 400 may include an accelerometer or gyroscope (e.g., as part of an Inertial Measurement Unit (IMU)) that enables detection of motion. In some examples, computing device 400 may use the output of an accelerometer or gyroscope to render immersive augmented reality or virtual reality.

The power source 416 may include a hard-wired power source, a battery power source, or a combination thereof. The power supply 416 may provide power to the computing device 400 to enable operation of the components of the computing device 400.

The one or more presentation components 418 may include a display (e.g., a monitor, touch screen, television screen, Heads Up Display (HUD), other display types, or combinations thereof), speakers, and/or other presentation components. One or more rendering components 418 may receive data from other components (e.g., one or more GPUs 408, one or more CPUs 406, etc.) and output the data (e.g., as images, videos, sounds, etc.).

The disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The present disclosure may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialized computing devices, and the like. The present disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.

As used herein, a recitation of "and/or" with respect to two or more elements is to be interpreted to mean only one element or combination of elements. For example, "element a, element B, and/or element C" may include only element a, only element B, only element C, element a and element B, element a and element C, element B and element C, or elements A, B and C. In addition, "at least one of the elements a or B" may include at least one of the elements a, at least one of the elements B, or at least one of the elements a and at least one of the elements B. Further, "at least one of the elements a and B" may include at least one of the elements a, at least one of the elements B, or at least one of the elements a and at least one of the elements B.

The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps than those described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
