Placement and scheduling of radio signal processing data stream operations

Publication No.: 1963582    Publication date: 2021-12-14

Abstract: Placement and scheduling of radio signal processing data stream operations, created by T. J. O'Shea, 2018-04-17. The present application relates to placement and scheduling of radio signal processing data flow operations. An example method provides a raw radio signal processing computation dataflow graph that includes nodes representing operations and directed edges representing data flows. The nodes and the directed edges of the raw radio signal processing computation dataflow graph are partitioned to produce a set of software kernels that, when executed on processing units of a target hardware platform, achieve a particular optimization goal. A runtime resource schedule, including data placement for individual software kernels in the set, is determined so that operations execute efficiently on the processing units of the target hardware platform. Resources of the processing units in the target hardware platform are then allocated according to the defined runtime resource schedule.

1. A method, comprising:

obtaining information about a radio signal processing application to be executed using a radio communication modem;

obtaining hardware configuration information about the radio communication modem;

determining a computational data flow graph corresponding to the radio signal processing application, wherein the computational data flow graph comprises one or more tensor blocks representing one or more operations of the radio signal processing application and directed edges representing data flow between the one or more tensor blocks;

partitioning the tensor blocks and the directed edges of the computational dataflow graph using the hardware configuration information and one or more machine learning models to generate a set of software kernels for execution by one or more processing units of the radio communication modem to achieve a particular optimization goal;

determining a runtime resource schedule for the set of software kernels to perform operations on the one or more processing units of the radio communication modem; and

allocating resources of the one or more processing units of the radio communication modem according to the runtime resource schedule.

2. The method of claim 1, wherein obtaining information about the radio signal processing application comprises determining one or more digital signal processing tasks corresponding to the radio signal processing application, and

wherein determining the computational data flow graph corresponding to the radio signal processing application further comprises:

grouping the one or more digital signal processing tasks into one or more first digital signal processing tasks and one or more second digital signal processing tasks;

identifying one or more machine learning based mapping functions in place of the one or more second digital signal processing tasks; and

determining the computational dataflow graph to include the one or more first digital signal processing tasks and the one or more machine learning-based mapping functions.

3. The method of claim 2, further comprising:

determining the one or more tensor blocks to represent at least one of the one or more first digital signal processing tasks or the one or more machine learning-based mapping functions.

4. The method of claim 2, wherein the one or more digital signal processing tasks comprise one or more of gain control, synchronization, demodulation, or Forward Error Correction (FEC) decoding.

5. The method of claim 1, wherein obtaining information about the radio signal processing application comprises determining a pre-processing task and one or more digital signal processing tasks corresponding to the radio signal processing application, and

wherein determining the computational data flow graph corresponding to the radio signal processing application further comprises:

identifying one or more machine learning based mapping functions in place of the one or more digital signal processing tasks, wherein the one or more machine learning based mapping functions are trained using sample data to provide an approximate solution; and

determining the computational dataflow graph to include the pre-processing task and the one or more machine-learning based mapping functions.

6. The method of claim 5, further comprising:

determining the one or more tensor blocks to represent at least one of the pre-processing task or the one or more machine-learning-based mapping functions.

7. The method of claim 5, wherein the one or more digital signal processing tasks comprise one or more of gain control, synchronization, demodulation, or Forward Error Correction (FEC) decoding.

8. The method of claim 1, wherein the radio communication modem corresponds to one of: a single processor or multiple cores on a cellular phone, a handheld phone, a Digital Signal Processor (DSP), or an embedded processor; multiple processors of the same type; a shared memory architecture with multiple types of coprocessors; or a number of multiprocessor computers with graphics or other tensor coprocessors in a large network configuration, and

wherein obtaining hardware configuration information about the radio communication modem comprises:

obtaining data identifying one or more of: available processing units in the radio communication modem, processing capabilities of the available processing units, available memory in the radio communication modem, bandwidth or communication latency of a bus or network connecting processors in the radio communication modem, available energy or battery life of the radio communication modem, hardware specifications of one or more processors or digital logic devices, available instructions and capabilities, coding toolchains, bus and interconnect information, memory capacity, rate of input information, or information about input and output information flows.

9. The method of claim 1, wherein the particular optimization goal is determined based on user input, the method further comprising:

receiving, through a user interface, one or more inputs specifying weights for one or more optimization objective functions;

combining the one or more optimization objective functions in proportion to the respective weights; and

determining the particular optimization goal using the combination of the one or more optimization objective functions.

10. The method of claim 1, wherein the particular optimization goal comprises one or more of: minimizing overall resource usage, maximizing processing unit usage across all available processing units, minimizing latency in computational dataflow graph processing by a processing unit, achieving maximum throughput, minimizing power consumption, minimizing interference with other running software processes, or minimizing processor, logic gate, or memory requirements for fixed rate or latency execution.

11. The method of claim 1, wherein partitioning the tensor blocks and the directed edges of the computational dataflow graph to produce the set of software kernels to achieve the particular optimization goal includes:

identifying the particular optimization goal as minimizing overall resource usage;

predicting one or more of: a processing capacity of each processing unit, a memory access time of each processing unit, or a communication time between the processing units of the radio communication modem; and

iteratively partitioning the tensor blocks and the directed edges of the computational dataflow graph to generate the set of software kernels that minimize the overall resource usage based on one or more of a predicted processing capacity, a predicted memory access time, or a predicted communication time.

12. The method of claim 1, wherein the radio signal processing application comprises one or more of: a cellular baseband processing task corresponding to an LTE cellular signal, a 5G cellular signal, or a 6G cellular signal; radio signal sensing, labeling, analysis, or mapping; interference or distortion correction of radio signals; radar applications; or a cellular, Bluetooth, or Wi-Fi radio receiver.

13. The method of claim 1, further comprising:

presenting a representation of a plurality of radio signal processing blocks on a user interface;

receiving one or more inputs through the user interface that (i) select one or more radio signal processing blocks of the plurality of radio signal processing blocks, and (ii) connect the selected radio signal processing blocks in an arrangement representative of the radio signal processing application; and

determining the computational data flow graph corresponding to the radio signal processing application using, at least in part, the one or more inputs.

14. The method of claim 13, wherein receiving the one or more inputs further comprises receiving parameters describing one or more of the selected radio signal processing blocks, wherein the method further comprises:

mapping the parameters to the radio signal processing application.

15. One or more non-transitory computer-readable media storing instructions operable, when executed on one or more processors, to cause the one or more processors to perform operations comprising:

obtaining information about a radio signal processing application to be executed using a radio communication modem;

obtaining hardware configuration information about the radio communication modem;

determining a computational data flow graph corresponding to the radio signal processing application, wherein the computational data flow graph comprises one or more tensor blocks representing one or more operations of the radio signal processing application and directed edges representing data flow between the one or more tensor blocks;

partitioning the tensor blocks and the directed edges of the computational dataflow graph using the hardware configuration information and one or more machine learning models to generate a set of software kernels for execution by one or more processing units of the radio communication modem to achieve a particular optimization goal;

determining a runtime resource schedule for the set of software kernels to perform operations on the one or more processing units of the radio communication modem; and

allocating resources of the one or more processing units of the radio communication modem according to the runtime resource schedule.

16. The one or more non-transitory computer-readable media of claim 15, wherein obtaining information about the radio signal processing application comprises determining one or more digital signal processing tasks corresponding to the radio signal processing application, and

wherein determining the computational data flow graph corresponding to the radio signal processing application further comprises:

grouping the one or more digital signal processing tasks into one or more first digital signal processing tasks and one or more second digital signal processing tasks;

identifying one or more machine learning based mapping functions in place of the one or more second digital signal processing tasks; and

determining the computational dataflow graph to include the one or more first digital signal processing tasks and the one or more machine learning-based mapping functions.

17. The one or more non-transitory computer-readable media of claim 16, wherein the one or more digital signal processing tasks comprise one or more of gain control, synchronization, demodulation, or Forward Error Correction (FEC) decoding.

18. The one or more non-transitory computer-readable media of claim 15, wherein obtaining information about the radio signal processing application comprises determining a pre-processing task and one or more digital signal processing tasks corresponding to the radio signal processing application, and

wherein determining the computational data flow graph corresponding to the radio signal processing application further comprises:

identifying one or more machine learning based mapping functions in place of the one or more digital signal processing tasks, wherein the one or more machine learning based mapping functions are trained using sample data to provide an approximate solution; and

determining the computational dataflow graph to include the pre-processing task and the one or more machine-learning based mapping functions.

19. The one or more non-transitory computer-readable media of claim 15, wherein the particular optimization goal is determined based on user input, the operations further comprising:

receiving, through a user interface, one or more inputs specifying weights for one or more optimization objective functions;

combining the one or more optimization objective functions in proportion to the respective weights; and

determining the particular optimization goal using the combination of the one or more optimization objective functions,

wherein the particular optimization goal comprises one or more of: minimizing overall resource usage, maximizing processing unit usage across all available processing units, minimizing latency in computational dataflow graph processing by a processing unit, achieving maximum throughput, minimizing power consumption, minimizing interference with other running software processes, or minimizing processor, logic gate, or memory requirements for fixed rate or latency execution.

20. The one or more non-transitory computer-readable media of claim 15, wherein the radio signal processing application comprises one or more of: a cellular baseband processing task corresponding to an LTE cellular signal, a 5G cellular signal, or a 6G cellular signal; radio signal sensing, labeling, analysis, or mapping; interference or distortion correction of radio signals; radar applications; or a cellular, Bluetooth, or Wi-Fi radio receiver.

21. The one or more non-transitory computer-readable media of claim 15, wherein the operations further comprise:

presenting a representation of a plurality of radio signal processing blocks on a user interface;

receiving one or more inputs through the user interface that (i) select one or more radio signal processing blocks of the plurality of radio signal processing blocks, and (ii) connect the selected radio signal processing blocks in an arrangement representative of the radio signal processing application; and

determining the computational data flow graph corresponding to the radio signal processing application using, at least in part, the one or more inputs.

22. A system, comprising:

one or more processors; and

one or more storage devices storing instructions operable, when executed on the one or more processors, to cause the one or more processors to perform operations comprising:

obtaining information about a radio signal processing application to be executed using a radio communication modem;

obtaining hardware configuration information about the radio communication modem;

determining a computational data flow graph corresponding to the radio signal processing application, wherein the computational data flow graph comprises one or more tensor blocks representing one or more operations of the radio signal processing application and directed edges representing data flow between the one or more tensor blocks;

partitioning the tensor blocks and the directed edges of the computational dataflow graph using the hardware configuration information and one or more machine learning models to generate a set of software kernels for execution by one or more processing units of the radio communication modem to achieve a particular optimization goal;

determining a runtime resource schedule for the set of software kernels to perform operations on the one or more processing units of the radio communication modem; and

allocating resources of the one or more processing units of the radio communication modem according to the runtime resource schedule.

23. The system of claim 22, wherein obtaining information about the radio signal processing application comprises determining one or more digital signal processing tasks corresponding to the radio signal processing application, and

wherein determining the computational data flow graph corresponding to the radio signal processing application further comprises:

grouping the one or more digital signal processing tasks into one or more first digital signal processing tasks and one or more second digital signal processing tasks;

identifying one or more machine learning based mapping functions in place of the one or more second digital signal processing tasks; and

determining the computational dataflow graph to include the one or more first digital signal processing tasks and the one or more machine learning-based mapping functions.

24. The system of claim 22, wherein obtaining information about the radio signal processing application comprises determining a pre-processing task and one or more digital signal processing tasks corresponding to the radio signal processing application, and

wherein determining the computational data flow graph corresponding to the radio signal processing application further comprises:

identifying one or more machine learning based mapping functions in place of the one or more digital signal processing tasks, wherein the one or more machine learning based mapping functions are trained using sample data to provide an approximate solution; and

determining the computational dataflow graph to include the pre-processing task and the one or more machine-learning based mapping functions.

25. The system of claim 22, wherein the operations further comprise:

presenting a representation of a plurality of radio signal processing blocks on a user interface;

receiving one or more inputs through the user interface that (i) select one or more radio signal processing blocks of the plurality of radio signal processing blocks, and (ii) connect the selected radio signal processing blocks in an arrangement representative of the radio signal processing application; and

determining the computational data flow graph for the radio signal processing application using, at least in part, the one or more inputs.

Technical Field

The present description relates to expressing, placing, and scheduling computational graph operations representing radio signal processing algorithms for execution in a target hardware platform.

Background

The target hardware platform may comprise a computing device having a single processor or multiple processors connected using a network connection, memory, or bus. The target hardware platform may be a mobile phone, an embedded processor or field programmable gate array in a software radio system that processes radio frequency (RF) data, or a large-scale data center. Multiple processors within the target hardware platform execute software kernels that contain computational graph operations.

Determining the distribution and scheduling of operations within the software kernels, and the distribution of the software kernels across computing devices in the target hardware platform, can be challenging, for example, when accounting for differences in the computing resource usage, power usage, throughput, and energy usage required by individual radio signal processing operations and by a particular target hardware platform.

Disclosure of Invention

This specification describes techniques for expressing and dynamically allocating and scheduling radio signal processing computation graph operations across software kernels in a computing device of a target hardware platform. The techniques generally relate to methods and systems for determining optimal execution placement and scheduling of radio signal computation graph operations in view of a particular computing environment and optimization objectives.

In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of placing and scheduling radio signal processing data flow operations. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Configuring a system of one or more computers to perform a particular operation or action means that the system has installed thereon software, firmware, hardware, or a combination thereof that when operated causes the system to perform the operation or action. Configuring one or more computer programs to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.

The foregoing and other embodiments may each optionally include one or more of the following features, either alone or in combination. In particular, an embodiment comprises all the following features in combination.

One example implementation includes providing a raw radio signal processing computation dataflow graph that includes nodes representing operations and directed edges representing data flows, the raw radio signal processing computation dataflow graph representing a functional radio signal processing application. The nodes and the directed edges of the raw radio signal processing computation dataflow graph are partitioned to produce a set of software kernels that, when executed on processing units of a target hardware platform, achieve a particular optimization goal. A runtime resource schedule is defined that includes data placement for individual software kernels in the set to efficiently perform operations on a plurality of processing units of the target hardware platform. Resources of the plurality of processing units in the target hardware platform are allocated according to the defined runtime resource schedule.

In some embodiments, prior to providing the raw radio signal processing computation data flow graph, functional radio signal processing tensor blocks are provided to build a functional radio signal processing application. The functional radio signal processing tensor blocks represent radio tensor symbolic expressions, with directed edges representing data flows. A functional radio signal processing block data flow graph is obtained that contains functional tensor blocks representing a particular functional radio signal processing application. The functional radio signal processing block data flow graph is mapped to a particular raw radio signal processing computation data flow graph corresponding to the functionality of the particular functional radio signal processing application. The particular raw radio signal processing computation dataflow graph is then used as the raw radio signal processing computation dataflow graph.

In some embodiments, defining the runtime resource schedule further comprises: determining buffer sizes between software kernels; determining an amount of data that each software kernel executes at a given time; determining an order of execution of the software kernels; and determining the amount of information transferred over a bus or memory region, or moved between processor domains, between kernel executions.
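For illustration, the following Python sketch shows one possible in-memory representation of such a runtime resource schedule. The class names, fields, and values (RuntimeResourceSchedule, items_per_execution, "tune_filter", and so on) are assumptions introduced here for exposition and are not structures defined by this specification.

```python
# Illustrative sketch (not from the specification) of how a runtime resource
# schedule for a set of software kernels might be represented in code.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class KernelSchedule:
    kernel_id: str
    processing_unit: str          # e.g. "cpu0", "gpu0"
    items_per_execution: int      # amount of data the kernel consumes per run
    order: int                    # position in the execution order

@dataclass
class RuntimeResourceSchedule:
    kernel_schedules: List[KernelSchedule]
    # buffer sizes (in items) for each directed edge between two kernels
    buffer_sizes: Dict[Tuple[str, str], int] = field(default_factory=dict)
    # bytes moved between processor/memory domains between kernel executions
    transfer_bytes: Dict[Tuple[str, str], int] = field(default_factory=dict)

schedule = RuntimeResourceSchedule(
    kernel_schedules=[
        KernelSchedule("tune_filter", "cpu0", items_per_execution=4096, order=0),
        KernelSchedule("demod_decode", "gpu0", items_per_execution=4096, order=1),
    ],
    buffer_sizes={("tune_filter", "demod_decode"): 16384},
    transfer_bytes={("cpu0", "gpu0"): 4096 * 8},
)
print(schedule.kernel_schedules[0])
```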

The respective software kernels may be executed on the processing units of the target hardware platform according to the defined runtime resource schedule.

In some embodiments, operations are pipelined across the processing units of the target hardware platform. Additionally or alternatively, the operations are performed in parallel.

The functional radio signal tensor block may represent a common radio signal processing operation that acts on the input tensor data and produces an output tensor data stream. Common radio signal processing operations may include one or more of the following: finite impulse response filters, fast fourier transforms, infinite impulse response filters, digital oscillators and mixers, automatic gain control functions, synchronization algorithms, symbol modulators or demodulators, error correction encoders or decoders, GNU radio function operations, or Matlab function operations.

In some implementations, segmenting the nodes and the directed edges of the raw radio signal processing computation data flow graph to produce a set of software kernels includes: predicting an initial set of software kernels that minimizes overall resource usage of the processing units; measuring resource usage of the processing units running the initial set of software kernels; and altering the partitioning to produce an updated set of software kernels that achieves the optimization goal based on the measured resource usage. Measuring resource usage may include measuring actual resource usage of the processing units when the processing units execute the initial set of software kernels. Measuring resource usage may also include measuring actual resource usage as the data flowing into the raw radio signal processing computation data flow graph changes. The data flowing into the raw radio signal processing computation data flow graph may comprise: radio frequency data, signal loading, or content type.

Achieving the optimization goal based on the measured resource usage may include: identifying the optimization goal as minimizing overall resource usage; predicting a processing capacity of each processing unit; predicting a memory access time for each processing unit; predicting a communication time between the processing units; and iteratively partitioning the nodes and the directed edges of the raw radio signal processing computation dataflow graph to produce a set of software kernels that minimizes the overall resource usage based on the predicted processing capacity, predicted memory access time, and predicted communication time.
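The following Python sketch illustrates the flavor of such a prediction-driven partitioning search on a small linear chain of operations. It is a simplified stand-in, not the patented algorithm: the operation names, per-item costs, communication cost, and the brute-force enumeration are assumptions made for exposition, and the cost model (bottleneck stage time plus a fixed per-boundary communication cost) is one possible choice among many.

```python
# Minimal illustrative sketch: partition a linear chain of operations into
# pipeline stages (software kernels), each on its own processing unit, and
# pick the partition minimizing predicted per-item bottleneck latency
# = slowest stage compute time + per-boundary communication time.
import itertools

ops = ["mix", "filter", "decimate", "sync", "demod", "fec"]
compute_us = {"mix": 2.0, "filter": 5.0, "decimate": 1.0,
              "sync": 3.0, "demod": 4.0, "fec": 6.0}   # predicted per-item cost
comm_us = 1.5          # predicted cost of crossing a kernel boundary
max_units = 3          # number of available processing units (assumed)

def stage_times(cuts):
    """Split `ops` at the given cut indices and return per-stage compute times."""
    bounds = [0, *cuts, len(ops)]
    return [sum(compute_us[o] for o in ops[a:b])
            for a, b in zip(bounds, bounds[1:])]

best = None
for k in range(max_units):                       # k cuts -> k + 1 kernels
    for cuts in itertools.combinations(range(1, len(ops)), k):
        # Pipeline throughput is limited by its slowest stage; each boundary
        # adds predicted communication time to that bottleneck.
        cost = max(stage_times(cuts)) + k * comm_us
        if best is None or cost < best[0]:
            best = (cost, cuts)

cost, cuts = best
print(f"best predicted bottleneck: {cost:.1f} us, cut after ops {cuts}")
```

In a deployed system the brute-force enumeration would be replaced by an iterative search guided by measured resource usage, as described above.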

The optimization objective may be: maximizing processing unit usage across all available processing units; minimizing latency for graph processing by the processing units; obtaining maximum throughput; minimizing power consumption; minimizing interference with other running software processes; or minimizing processor, logic gate, or memory requirements for execution at a fixed rate or latency.

In some embodiments, a second raw radio signal processing computation dataflow graph is obtained that includes nodes representing operations and directed edges representing dependencies, and the particular optimization objective is identified as minimizing overall resource usage by the processing unit when executing both the first raw radio signal processing computation dataflow graph and the second raw radio signal processing computation dataflow graph.

The processing unit of the target hardware platform may include multiple types of processing units.

Defining data placement for the respective software kernels to efficiently distribute operations across the processing units of the target hardware platform may include: determining, from the multiple types of processing units in the target hardware platform, an optimal processing unit type on which to execute at least one software kernel; and defining a data placement of the at least one software kernel of the set relative to a processing unit of the determined optimal processing unit type in the target hardware platform.
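A concrete, hedged sketch of the per-kernel device-type choice is shown below. The kernel names, predicted execution times, and available unit types are illustrative assumptions rather than measured values from this specification; the selection rule (lowest predicted cost among available types) is one simple policy.

```python
# Hedged sketch: choose, for each software kernel, the processing unit type
# with the lowest predicted execution cost on a heterogeneous platform.
predicted_us = {
    # kernel           cpu           gpu          fpga
    "fir_filter": {"cpu": 40.0, "gpu": 12.0, "fpga": 8.0},
    "fft":        {"cpu": 25.0, "gpu": 6.0,  "fpga": 9.0},
    "fec_decode": {"cpu": 80.0, "gpu": 30.0, "fpga": 20.0},
}
available_types = {"cpu", "gpu"}   # e.g. this platform has no FPGA fabric

placement = {
    kernel: min((t for t in costs if t in available_types), key=costs.get)
    for kernel, costs in predicted_us.items()
}
print(placement)   # {'fir_filter': 'gpu', 'fft': 'gpu', 'fec_decode': 'gpu'}
```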

The functional radio signal processing application may comprise: implementing a communications modem; performing cellular baseband processing tasks; performing radio sensing, labeling, analysis, or mapping; processing radio signals to remove interference or correct distortion; or transmitting or receiving radar pulses. Performing cellular baseband processing tasks may include transceiving (e.g., transmitting or receiving) Long Term Evolution (LTE) waveforms, 5G waveforms, or 6G waveforms. Performing radio sensing may include sensing radio signals to identify threats, anomalies, hardware faults, interference, or mappings. The functional radio signal processing application may include processing a radio frequency signal to generate a radio frequency signal with interference or distortion removed. The functional radio signal processing application may also include processing radar signals to generate pulses as tensors, receive impulse responses, and estimate properties of the reflections.

In some implementations, the target hardware platform includes a large-scale data center with modems and computing devices that receive data from input streams originating from antennas and analog-to-digital converters. The target hardware platform may include at least one of: a single processor, multiple processors of the same type, a shared memory architecture with multiple types of coprocessors, a distributed memory architecture, or a network of multiple processors or multi-processor computers, each with a separate memory architecture.

The tensor data may include radio signals such as digitally sampled in-phase and quadrature time series, digitally sampled acoustic time series information, power spectrum information such as spectrograms, radar data cube processing information such as pulse integration, or the output of other software that may produce vectors of bits, packets, messages, samples, or values.

In some embodiments, obtaining a functional radio signal processing block data flow diagram that includes functional tensor blocks representing a particular functional radio signal processing application may comprise: providing a user interface that allows a user to select functional signal processing blocks and connect the functional signal processing blocks together to form a functional radio signal processing application; and receiving the functional radio signal processing application from the user interface in the form of a functional radio signal processing block data flow diagram.

In some embodiments, the segmenting may comprise iterative segmenting.

In some embodiments, the functional radio signal tensor block may represent a combination of one or more radio signal processing operations and machine learning operations that act on input tensor data and produce an output tensor data stream.

In another embodiment, a functional radio signal processing application may be identified for execution in a deployed radio system. A raw radio signal processing computation data flow graph including nodes representing operations and directed edges representing data flows may be obtained. This graph may represent the functional radio signal processing application and achieve specific optimization goals during execution on a hardware platform in the deployed radio system. The raw radio signal processing computation dataflow graph may then be executed on the hardware platform in the deployed radio system.

Another embodiment may be a system comprising: one or more computers; and one or more storage devices storing instructions that are operable, when executed on the one or more computers, to cause the one or more computers to perform any of the above implementations.

Yet another implementation may be one or more non-transitory computer-readable storage media comprising instructions stored thereon that are executable by a processing device and that, upon such execution, cause the processing device to perform any of the above implementations.

The subject matter described in this specification can be implemented in particular embodiments to realize one or more of the following advantages.

The operation placement and scheduling techniques described in this document may dynamically allocate and schedule radio signal processing operations for execution on target hardware platforms having a number of different configurations, providing optimal execution with respect to various optimization goals. Such optimization goals include, for example, minimizing resource usage, maximizing throughput, minimizing latency, maximizing utilization of computational processing elements, and minimizing power consumption.

Conventional techniques often do not distinguish between high-level functional capabilities and low-level computational graph operations, resulting in sub-optimal mappings to software kernels. In GNU Radio, for example, each functional capability corresponds directly to a rigid, predefined software kernel.

While conventional techniques are limited in the manner in which radio signal processing algorithms are expressed and in the way those algorithms are placed on a particular architecture, the operation placement and scheduling methods and systems described herein are able to flatten a radio signal processing block graph into a fine-grained radio signal processing operation graph by joining operations across functional capabilities, in order to efficiently map and schedule operations across many different hardware platform architecture configurations. The joined operations may then be placed and scheduled in a target hardware platform, such as a low-power small mobile radio or a large enterprise signal processing environment, to optimize radio signal processing execution. The disclosed methods and systems improve over the prior art in that efficient algorithmic synthesis is performed on homogeneous processors (e.g., multiple cores of the same type of processor), on heterogeneous processors (e.g., a collection of different types of processors and coprocessors connected by some memory and/or bus) with multi-core and distributed memory architectures, or on both. The disclosed techniques for efficient expression, synthesis, scheduling, and execution of radio signal processing algorithms have application, among others, in low-power small mobile radio devices and in large enterprise signal processing environments with multiple processors and multiple waveforms.

By dynamically estimating optimal placement and scheduling of radio signal processing operations, the disclosed systems and methods make more efficient use of resources (such as the computational capacity of the various processing units, buffer sizes, memory and buffer architectures, and throughput between processor domains) and determine placements and schedules of operations in a target hardware platform that achieve the defined optimization goals, as compared to conventional radio signal processing operation placement and scheduling techniques. By mitigating bottlenecks that limit the scaling of an application, the target hardware platform may obtain higher aggregate throughput or shorter latency for an application whose operations have been optimized using the disclosed techniques. By using the disclosed techniques, the target hardware platform may also be able to reduce resource consumption, for example by reducing the minimum element clock speed required or the amount of buffering required for a given fixed-rate signal processing application based on optimal runtime scheduling of operations. This capability primarily provides power-saving advantages over conventional systems.

The disclosed operation placement and scheduling methods and systems may learn, through training, a mapping from hardware platform capabilities and optimization objectives to optimal placement and scheduling. For example, the techniques may map operations to physical devices so as to minimize the overall cost of processing element capacity, memory access, and communication latency.

The disclosed operational placement and scheduling methods and systems may also blend traditional radio signal processing, expressed as a computational graph, with a machine learning model, also expressed as a computational graph, to determine efficient placement and scheduling for applications containing both processing types.

In addition, by enabling efficient and automated mapping from functional definitions to target hardware devices, the disclosed operation placement and scheduling methods and systems ensure that applications and waveforms are significantly more portable between target hardware platform architectures.

The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

Drawings

FIG. 1 is an overall system diagram illustrating an example operational placement and scheduling system for optimally allocating and scheduling radio signal operations on a processing unit of a target hardware platform.

FIG. 2 illustrates an example functional radio signal processing block data flow diagram.

FIG. 3A illustrates an example radio signal processing application that is fully defined by digital signal processing tasks.

FIG. 3B illustrates an example radio signal processing application defined by digital signal processing tasks and a machine learning model.

FIG. 3C illustrates an example radio signal processing application defined as a fully learning system.

FIG. 4 illustrates an example operational placement and scheduling system that provides a user with the ability to build an application using functional data tensor blocks through a user interface.

FIG. 5 shows an example of a raw radio signal processing computation dataflow graph resulting from flattening a functional radio signal processing block dataflow graph such as that shown in FIG. 2.

FIG. 6 sets forth a flow chart illustrating an example process for determining an optimal runtime resource schedule for executing computational graph operations in a target hardware platform.

FIG. 7 illustrates an example distribution of a software kernel across multiple processing units of a target hardware platform.

FIG. 8A illustrates an example of runtime scheduling of operations, showing a method of parallelizing operations across a data tensor dimension.

FIG. 8B illustrates an example of runtime scheduling of operations, showing how the parallelized operations of FIG. 8A can be performed with twice the workload. Although FIG. 8B shows operations being performed in parallel, in some embodiments the operations are pipelined across the processors in the target hardware platform.

FIG. 9A illustrates an example radio receiver system in which resource and kernel placement is defined by runtime resource scheduling to efficiently perform operations on a target platform for optimized execution.

FIG. 9B illustrates an example radio transmitter system in which resource and kernel placement is defined by runtime resource scheduling to efficiently perform operations on a target platform for optimized execution.

Like reference numbers and designations in the various drawings indicate like elements.

Detailed Description

This specification generally describes operation placement and scheduling systems and methods that express functional radio signal applications as raw radio signal processing computation dataflow graph operations and determine optimal execution placement and scheduling of the operations and other resources in a target hardware platform.

The operational placement and scheduling system and method may be used to design and deploy a radio signal processing system that performs a particular application. An example radio signal processing application may sense radio signals used to identify threats, anomalies, hardware faults, and/or interference. Another example radio signal processing application may synthesize a radio communication modem for a spatial communication system, a cellular communication system, a backhaul communication system, or a military mesh networking communication system. Further example radio signal processing applications may process Radio Frequency (RF) signals to generate RF signals with interference or distortion removed. The radio signal processing application may perform cellular baseband processing tasks to transceive (e.g., transmit or receive) LTE, 5G, or 6G waveforms. The radio signal processing application may also process signals from one or more antennas at a base station or a cell tower to modulate, transmit, receive, combine, or recover bursts to or from multiple users, such as in a cellular massive Multiple Input Multiple Output (MIMO) baseband processor.

The operational placement and scheduling system and method may be used to create a radio signal processing system for processing radar signals to generate pulses, receive impulse responses, and estimate properties of the reflections. Further, the operational placement and scheduling system and method may be used to design and deploy systems that perform large-scale data center processing. For example, the radio processing system may be installed in a data center of a satellite internet operator operating 100 modems with various modulations and codings that affect modem complexity, each transmitting or receiving 20 MHz of bandwidth from antennas and analog-to-digital converters across 50 x 40 megahertz (MHz) input streams. The 100 modems may run across a network environment having 20 multi-core computers with Graphics Processing Units (GPUs). The operational placement and scheduling system can estimate the optimal placement and scheduling of the modem applications across the computers.
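To give a sense of scale for this example, the back-of-the-envelope calculation below aggregates the stated figures. It reads "50 x 40 MHz" as 50 streams of 40 MHz each and assumes complex baseband sampling at the channel bandwidth with 4-byte I/Q samples; these are interpretive assumptions for illustration, not figures stated in the text.

```python
# Back-of-the-envelope sketch of the aggregate load in the data-center example
# above (assumes complex sampling at the channel bandwidth and 4-byte I/Q
# samples; both are illustrative assumptions).
n_streams = 50
stream_bw_mhz = 40.0
n_modems = 100
modem_bw_mhz = 20.0
bytes_per_sample = 4            # 16-bit I + 16-bit Q

input_msps = n_streams * stream_bw_mhz    # complex megasamples/second arriving
modem_msps = n_modems * modem_bw_mhz      # megasamples/second the modems consume
print(f"aggregate input: {input_msps:.0f} MS/s "
      f"(~{input_msps * bytes_per_sample / 1e3:.1f} GB/s)")
print(f"aggregate modem load: {modem_msps:.0f} MS/s across 20 computers "
      f"=> ~{modem_msps / 20:.0f} MS/s per computer")
```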

In addition to larger data center platforms, the radio signal processing functions may also be deployed on one or more radio receivers, one or more radar processors, one or more radio transmitters, or another hardware platform or combination of hardware platforms.

FIG. 1 illustrates an example operational placement and scheduling system 100. The operational placement and scheduling system 100 is an example of a system, implemented as computer programs on one or more computers in one or more locations, in which the systems, components, and techniques described below can be implemented. The operation placement and scheduling system 100 determines optimal placement and scheduling, within the target hardware platform 110, of computational graph operations and resources for an application represented as the raw radio signal processing computation dataflow graph 102. The operation placement and scheduling model 107 takes as input the application, represented as the raw radio signal processing computation dataflow graph 102, and information 106 about the target hardware platform on which the application is to be scheduled and deployed.

The raw radio signal processing computation data flow graph 102 contains nodes connected by directed edges. Each node in the computational graph represents an operation. The incoming edges of a node represent the flows of data into the node, i.e., the inputs to the operation represented by the node. An outgoing edge from a node represents an output stream of the operation represented by the node, to be used as an input to an operation represented by another node. Thus, a directed edge connecting a first node in the graph to a second node in the graph indicates that the output generated by the operation represented by the first node is used as an input to the operation represented by the second node. The data flowing into the raw radio signal processing computation dataflow graph may include: radio frequency environment data, signals, signal loading, data packets, or other content types.
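The node-and-edge structure described above can be pictured with a small Python sketch. The class and field names (OpNode, DataflowGraph, inputs) and the example operations are assumptions made for exposition, not an API defined by this specification.

```python
# Illustrative sketch of a dataflow graph whose nodes are operations and whose
# directed edges carry data from producer operations to consumer operations.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OpNode:
    name: str                                           # operation, e.g. "fft"
    inputs: List[str] = field(default_factory=list)     # names of upstream nodes

@dataclass
class DataflowGraph:
    nodes: List[OpNode]

    def edges(self):
        """Directed edges: (producer, consumer) pairs implied by node inputs."""
        return [(src, n.name) for n in self.nodes for src in n.inputs]

g = DataflowGraph(nodes=[
    OpNode("adc_source"),
    OpNode("mixer", inputs=["adc_source"]),
    OpNode("fir_filter", inputs=["mixer"]),
    OpNode("demod", inputs=["fir_filter"]),
])
print(g.edges())
```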

The target hardware platform 110 may include: a single processor; a plurality of processors of the same type; a shared memory architecture having multiple types of coprocessors (e.g., multiple processors of multiple types, such as graphics coprocessor cards); neuromorphic processors; programmable logic devices such as field programmable gate arrays; distributed memory architectures, such as processors with separate memory regions for each processor; or networks of multiple processors or of multiple types of processors, each having multiple cores and partitioned or separate memory architectures. The target hardware platform may be a cell phone, a handheld phone, an embedded Digital Signal Processor (DSP) or multiple cores on an embedded processor, or a number of different multi-processor computers with graphics or other tensor coprocessors in a large network configuration.

The information about the target hardware platform 106 may include: data identifying available processing units in the target hardware platform, the processing capabilities of those processing units, and the available memory in the target hardware platform; data about a bus or network connecting processors in the target hardware platform, such as bandwidth and communication latency or speed; available energy or battery life of computing devices in the target hardware platform; and other information about the target hardware platform 106 (e.g., hardware specifications of one or more processors or digital logic devices, available instructions and capabilities, compilation toolchains, other software running thereon, bus and interconnect information, memory capacity, rate of input information, information about input and output information flows, etc.).
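One possible shape for such a hardware configuration record is sketched below. The field names and numeric values are illustrative assumptions chosen for exposition; an actual deployment would populate equivalent information from the platform's own description.

```python
# Hedged sketch of a target hardware platform description; fields and values
# are illustrative assumptions, not a schema from the specification.
target_hardware_platform = {
    "processing_units": [
        {"id": "cpu0", "type": "arm_cortex_a53", "cores": 4, "clock_ghz": 1.4},
        {"id": "gpu0", "type": "embedded_gpu", "cores": 128, "clock_ghz": 0.9},
    ],
    "memory": {"shared_dram_mb": 2048, "gpu0_local_mb": 512},
    "interconnect": {"bus": "axi", "bandwidth_gbps": 8.0, "latency_us": 2.0},
    "power": {"battery_mwh": 12000, "tdp_w": 5.0},
    "io": {"input_rate_msps": 30.72, "output": "decoded_packets"},
    "toolchains": ["gcc", "cuda"],
}
print(len(target_hardware_platform["processing_units"]), "processing units")
```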

In some embodiments, the system may additionally and optionally receive, e.g., from a user, optimization input identifying one or more optimization objectives that should be emphasized during processing of the graph. Optimization input may be communicated by a user, designer, or automated controller by specifying weights for one or more optimization objective functions. These weights may be applied, for example, to estimated or measured performance metrics such as throughput, power consumption, or latency, evaluated over a plurality of placement and scheduling candidates. The system may select the configuration that best achieves the selected objectives in proportion to their weights.

For example, the optimization goal may be to minimize overall resource usage. This goal may be achieved by minimizing the overall resource usage of the target hardware platform when partitioning the raw radio signal processing computation data flow graph into a set of software kernels. The system 100 may use known or predicted resource usage and capabilities of the target hardware platform in determining how to partition. The system 100 may determine the processing power and capacity of each processing unit of the target hardware platform, known or predicted memory access times to run the computational graph, known or predicted communication times required to communicate information between processing units, overhead time spent by processors on context switches, and other system behaviors and constraints that affect the performance of the scheduled application.

Other optimization objectives may be: maximizing effective processing unit usage across all available processing units; minimizing latency for graph processing by the processing units; obtaining maximum throughput; minimizing power consumption of the target hardware platform computing devices while executing the computational graph; minimizing interference with other software processes running on the target hardware platform; and minimizing processor, logic gate, or memory requirements for executing the graph at a fixed rate with a strict data throughput constraint, such as 18.72 mega-samples per second (MSamples/sec), or with a certain minimum latency, such as a 10 millisecond deadline. This can be achieved, for example, by explicitly pruning placement and scheduling candidates that do not reach the required throughput or latency, and then selecting from the remaining candidates the one with the best resource usage metric as specified by the weighting of these objectives.
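The candidate pruning and weighted selection just described can be sketched in a few lines of Python. The candidate list, its metric values, and the weights are invented for illustration; the 18.72 MSamples/sec throughput figure and the 10 ms deadline are the example constraints mentioned above.

```python
# Minimal sketch of combining weighted optimization objectives and pruning
# placement/scheduling candidates that miss a hard throughput or latency
# constraint; candidate metrics and weights are illustrative assumptions.
candidates = [
    {"name": "pipeline_2cpu", "throughput": 20.1, "latency": 6.0,  "power": 3.0},
    {"name": "cpu_plus_gpu",  "throughput": 25.4, "latency": 12.0, "power": 5.5},
    {"name": "single_cpu",    "throughput": 9.8,  "latency": 4.0,  "power": 1.8},
]
required_throughput = 18.72     # MSamples/sec, hard constraint from the example
max_latency_ms = 10.0           # hard latency deadline from the example
weights = {"power": 1.0, "latency": 0.2}   # user-specified objective weights

feasible = [c for c in candidates
            if c["throughput"] >= required_throughput
            and c["latency"] <= max_latency_ms]

def weighted_cost(c):
    # Lower is better: weighted combination of the remaining objectives.
    return weights["power"] * c["power"] + weights["latency"] * c["latency"]

best = min(feasible, key=weighted_cost)
print("selected:", best["name"])   # -> pipeline_2cpu
```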

The operation placement and scheduling model 107 uses the computational graph 102, the target hardware platform information 106, and, optionally, the optimization input to partition the nodes and directed edges of the raw radio signal processing computation data flow graph 102 into a set of software kernels that, when executed on the target hardware platform, achieve the optimization goal or goals specified by the optimization input. In addition to generating the software kernels, the model 107 also determines the runtime resource schedule for the software kernels and other resources in the target hardware platform 110. This process is described in more detail below with respect to FIG. 6.

The system then provides the runtime resource scheduling information 108 to the target hardware platform 110 for optimal execution of the particular radio signal processing application 102 on the target hardware platform 110. The runtime resource scheduling information 108 may be an efficient mapping/implementation that contains processor allocations for the kernels, buffer sizes, locations and references between kernels, memory transfer instructions between separate memory domains, the work order and/or workload (e.g., number of items processed) of the software kernels, and so forth. The runtime resource scheduling information may be provided in the form of compiled output, an intermediate language representation such as an abstract syntax tree, and metadata files or data structures that describe the system, its placement, and its execution. The target hardware platform 110 may then receive the input information data stream 103 in the form of tensors and execute the computational graph operations on the input 103 to produce the output 105. A tensor refers to a collection of numbers densely arranged in N dimensions, such as a vector (order-1 tensor), a matrix (order-2 tensor), or a higher-dimensional tensor. This input tensor data may include radio signals such as digitally sampled in-phase and quadrature time series, digitally sampled acoustic time series information, power spectrum information such as spectrograms, radar data cube processing information such as pulse integration, or the output of other software processes that may produce vectors of bits, packets, messages, samples, or values. The output 105 is the result of running the particular application graph on the input data 103 in the target hardware system.
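As a small illustration of what an in-phase/quadrature input tensor might look like, the sketch below packs a complex sample stream into a rank-3 tensor. The sample rate, tone frequency, and tensor shape are assumptions chosen for exposition.

```python
# Sketch of an input tensor of in-phase/quadrature (I/Q) samples; the signal
# and shapes are illustrative assumptions.
import numpy as np

fs = 1_000_000                        # assumed sample rate in Hz
t = np.arange(4096) / fs
iq = np.exp(2j * np.pi * 50_000 * t)  # complex tone at 50 kHz

# Represent the complex series as a rank-3 tensor: (batch, samples, 2), where
# the last axis holds the I and Q components.
tensor = np.stack([iq.real, iq.imag], axis=-1)[np.newaxis, ...]
print(tensor.shape)                   # (1, 4096, 2)
```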

The operational placement and scheduling system 100 represents a radio signal processing application using functional tensor blocks (202a-202c), as shown in FIG. 2. Each functional tensor block (202a-c) represents a common signal processing operation that acts on the input tensor data (201) and produces a tensor output (203). The selected functional tensor blocks (202a-c) together constitute a radio signal processing application. Common signal processing operations may include finite impulse response filters, fast Fourier transforms, infinite impulse response filters, automatic gain control, synchronization or demodulation algorithms, error correction decoding algorithms, beam steering or multi-antenna combining algorithms, or other high-level signal processing operations. One example of a similar set of high-level functional signal processing blocks is the block set of the GNU Radio application. High-level functional signal processing blocks may also represent Matlab functionality.
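The composition of functional blocks into an application can be sketched as follows. The block constructors, the chain helper, and the chosen operations are assumptions made for exposition only; they are not the block-set API of GNU Radio, Matlab, or this specification.

```python
# Illustrative sketch of composing functional tensor blocks into a small
# receiver front end; block names and the chain() helper are assumptions.
from typing import Callable, List
import numpy as np

Block = Callable[[np.ndarray], np.ndarray]

def fir_filter(taps: np.ndarray) -> Block:
    return lambda x: np.convolve(x, taps, mode="same")

def agc(target_rms: float = 1.0) -> Block:
    return lambda x: x * (target_rms / (np.sqrt(np.mean(np.abs(x) ** 2)) + 1e-12))

def fft_block() -> Block:
    return np.fft.fft

def chain(blocks: List[Block]) -> Block:
    def run(x: np.ndarray) -> np.ndarray:
        for b in blocks:
            x = b(x)
        return x
    return run

receiver_front_end = chain([agc(), fir_filter(np.ones(8) / 8), fft_block()])
out = receiver_front_end(np.random.randn(1024) + 1j * np.random.randn(1024))
print(out.shape)   # (1024,)
```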

For example, as shown in FIGS. 3A-3C, the radio signal processing application may be a radio receiver that receives samples, synchronizes, demodulates, decodes, and outputs data packets or bits. As shown in the example in FIG. 3A, the radio signal processing application 300a may be fully defined by digital signal processing tasks.

In this example, the radio receiver receives samples 301 that are sent to different digital signal processing tasks, such as a gain control algorithm 303. After the gain control algorithm, the input is synchronized using a synchronization algorithm 305, demodulated using a demodulation algorithm 307, and decoded using a Forward Error Correction (FEC) decoding algorithm 309. The receiver then outputs decoded packets 313. This radio receiver processing application 300a may be defined using digital signal processing functions, which may be expressed using functional data tensor blocks 202 as shown in FIG. 2, where each functional data tensor block represents a step in the radio receiver application process.

As shown by example in FIG. 3B, a similar radio receiver processing application may alternatively be defined by both digital signal processing tasks and machine learning models. In this example implementation, the radio receiver processing application 300b includes a radio receiver that receives a sample input 301 and sends samples to a gain control algorithm 303 and a synchronization algorithm 305. The radio receiver processing application 300b then replaces the demodulation and FEC decoding digital signal processing tasks with a machine learning based mapping function 311. The receiver then outputs decoded packets 313. Both traditional digital signal processing tasks and machine learning models can be defined using functional data tensor blocks 202 and tensor expressions as shown in FIG. 2, so that placement and scheduling of all parts can be jointly and efficiently optimized across computational resources.
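A hedged sketch of this hybrid structure is given below: conventional gain control and synchronization stages feed a learned mapping that stands in for demodulation and FEC decoding. The "model" is an untrained linear placeholder meant only to show how the stages compose; it is not the trained network contemplated by this specification, and all names and shapes are assumptions.

```python
# Sketch of a hybrid DSP + machine-learning receiver chain (FIG. 3B style);
# the learned stage is a stub, not a real trained demodulator/decoder.
import numpy as np

def gain_control(x):                      # conventional DSP stage
    return x / (np.max(np.abs(x)) + 1e-12)

def synchronize(x):                       # placeholder: assume already aligned
    return x

def learned_demod_fec(x, weights):        # ML-based mapping function (stub)
    features = np.stack([x.real, x.imag], axis=-1)       # (N, 2)
    logits = features @ weights                          # (N, 1) linear stand-in
    return (logits.squeeze(-1) > 0).astype(np.uint8)     # "decoded" bits

rng = np.random.default_rng(0)
samples = rng.standard_normal(256) + 1j * rng.standard_normal(256)
weights = rng.standard_normal((2, 1))

bits = learned_demod_fec(synchronize(gain_control(samples)), weights)
print(bits[:16])
```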

In some embodiments, the radio receiver processing application may be defined as a fully learning system, as shown by example in FIG. 3C. In this example implementation, the radio receiver processing application 300c includes a radio receiver that receives a sample input 301, applies a minimal pre-processing step 302, and then replaces the entire conventional digital signal processing chain for radio reception with a machine-learning based mapping function 312 that provides an approximate solution trained from sample data. In this example, the decoded data packets 313 are then output based on the machine-learned mapping function. The overall application may be defined using the functional data tensor blocks 202 and tensor expressions as shown in FIG. 2.

In one embodiment, the user may have the ability to build a radio signal processing application using the functional data tensor blocks 202 through a user interface, for example, by presenting blocks representing the functional blocks and connecting the blocks with arrows representing data flow to and from the functional blocks. FIG. 4 illustrates an example operational placement and scheduling system 400 that provides this capability to users. The system 400 provides a user interface 405 that presents the user with functional radio signal processing block units 407. A user may connect the functional radio signal processing blocks 407 together through the user interface 405 to form a functional radio signal processing application and may enter high-level parameters (e.g., channel index, preamble values, filter taps, or other specifications) describing each block. These high-level parameters are mapped appropriately into the generated application. Some example radio signal processing applications include: a radio receiver (e.g., an Advanced Television Systems Committee (ATSC), LTE, wireless fidelity (Wi-Fi), Bluetooth, satellite, or other similar radio communication system), a radar processor (e.g., pulse generation and integration, analysis, and state machine or optimization driven dynamic control behavior), a radio transmitter or receiver, a radar or sensing application, a communications modem, an application for processing radio signals to remove interference or correct distortion, or an application that processes radio frequency sample data to infer underlying information (e.g., corresponding objects, behaviors, potential threats, device failures or anomalies, etc.).

The system 400 receives the user's functional radio signal processing application through the user interface 405. The functional radio signal processing application may take the form of a functional radio signal processing block data flow diagram as shown in FIG. 2.

The functional radio signal processing block data flow graph is expressed in terms of tensor symbolic expressions that can be mapped into a flattened data flow graph by the mapping unit 410 to produce a raw radio signal processing computation data flow graph. The mapping unit 410 flattens the functional radio signal processing block dataflow graph by combining primitive operations across multiple functional blocks to form a large graph of primitive operations. For example, the functional blocks may have been stored in their corresponding raw graph form, or a procedural program may be converted to a raw graph form by a process that tracks the data flow through a collection of operations. The mapping unit 410 then joins operations without regard to the boundaries formed by the edges of the functional radio signal processing blocks 407 and may replicate portions of the graph to operate on different sets of data, for example to exploit data parallelism.

Figure 5 illustrates an example of an original radio signal processing computation dataflow graph 500 that results from flattening a functional radio signal processing block dataflow graph, such as the dataflow graph shown in figure 2. In the dataflow graph 500, the functional tensor blocks 202 are mapped to several tensor operations 512 without regard to the functional boundaries of the functional tensor blocks 202. For example, the tuning operation defined by a digital oscillator and mixer and the subsequent filtering and decimation operations may be combined into a single kernel by combining the primitive multiplication, addition, and other tensor operations that make up each functional block, or they may be split into two or more software kernels if that is considered more efficient for the optimization objective and the target hardware platform. The graph receives the same tensor inputs 510 as the original functional radio signal processing block dataflow graph and produces the same or similar outputs 513.
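As an illustration of what a fused tune/filter/decimate kernel computes at the primitive tensor level, the following sketch uses NumPy with made-up parameter values; it is a plausible stand-in, not the kernel produced from graph 500 itself.

```python
# Illustrative fused kernel: mix to baseband, FIR filter, and decimate.
import numpy as np

def fused_tune_filter_decimate(samples, f_offset, fs, taps, decim):
    n = np.arange(len(samples))
    osc = np.exp(-2j * np.pi * f_offset / fs * n)    # digital oscillator
    mixed = samples * osc                            # mixer (elementwise multiply)
    filtered = np.convolve(mixed, taps, mode="same") # FIR filter
    return filtered[::decim]                         # decimation

iq = (np.random.randn(4096) + 1j * np.random.randn(4096)).astype(np.complex64)
out = fused_tune_filter_decimate(iq, f_offset=250e3, fs=1e6,
                                 taps=np.ones(16) / 16, decim=4)
print(out.shape)   # (1024,)
```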

Referring back to fig. 4, once the system 400 flattens the functional radio signal processing block data flow graph into an original radio signal processing computation data flow graph, the operation placement and scheduling system 400 functions similarly to the operation placement and scheduling system 100 of fig. 1.

FIG. 6 illustrates a flow diagram of an example process 600 for determining an optimal runtime resource schedule for performing computational graph operations in a target hardware platform. The process 600 is performed by a system having one or more computers located at one or more locations and appropriately programmed according to the description. For example, the process 600 may be performed by a suitably programmed operation placement and scheduling system, such as the operation placement and scheduling system 100 of FIG. 1 or the operation placement and scheduling system 400 of FIG. 4.

As shown in fig. 6, to determine an optimal runtime resource schedule for executing computational graph operations on a target hardware platform, the system provides a raw radio signal processing computation dataflow graph 602 containing the computational graph operations. In some embodiments, the user expresses the radio signal processing application using a high-level functional tensor block diagram, as described above. The system flattens the high-level graph into a low-level raw graph to optimize graph operation execution. The system then partitions the raw signal processing computation dataflow graph to produce a set of software kernels. The system makes a preliminary prediction of how to partition the graph into software kernels. The system subdivides both the operations in the graph and its edges to achieve operational parallelism. In some cases, the system may also replicate portions of the graph on multiple processors to operate on different data sets, for example when the throughput of the corresponding sub-graph region is lower than that of the surrounding regions, thereby enabling additional parallelism. The system also determines the amount of data that should traverse each edge at a time to provide data parallelism and more efficient execution.
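One way to represent the resulting partition, including per-edge transfer sizes and replication for data parallelism, is sketched below; the field names and values are assumptions for illustration.

```python
# Illustrative container for a kernel partition of the primitive graph.
from dataclasses import dataclass

@dataclass
class KernelPartition:
    kernels: list          # list of sets of primitive op names
    edge_chunk: dict       # (src_op, dst_op) -> items moved per transfer
    replicas: dict         # kernel index -> replication factor for data parallelism

partition = KernelPartition(
    kernels=[{"osc", "mul"}, {"conv", "decim"}, {"demod"}],
    edge_chunk={("mul", "conv"): 4096, ("decim", "demod"): 1024},
    replicas={1: 2},   # replicate the filter kernel on two processors
)
```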

FIG. 7 illustrates an example distribution of software kernels across multiple processing units of a target hardware platform. In this example, the system allocates one core of processor A, such as processor core A1 740a, to execute software kernel 1 722a. Processor B may have multiple cores 760a-760n that process software kernel 2 722b, while additional cores of processor A, such as processor cores 780a-780n, process software kernel 3 722c.
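A placement like the one in FIG. 7 might be recorded as a simple mapping from software kernels to processor cores, as in the following sketch; the core identifiers are placeholders for whatever handles a runtime would expose.

```python
# Hypothetical kernel-to-core placement mirroring the FIG. 7 example.
placement = {
    "kernel_1": ["procA_core_1"],                          # one core of processor A
    "kernel_2": [f"procB_core_{i}" for i in range(4)],     # several cores of processor B
    "kernel_3": [f"procA_core_{i}" for i in range(2, 6)],  # additional processor A cores
}
for kernel, cores in placement.items():
    print(kernel, "->", cores)
```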

Referring back to the process of fig. 6, the system may iteratively partition the raw signal processing computation dataflow graph to determine an optimal set 604 of software kernels, and a mapping of those kernels onto the target hardware platform, that achieves at least one identified optimization goal.

To determine the optimal partitioning, the system may measure resource usage of the processing units in the target hardware platform when executing an initial set of software kernels. The measured resource usage may come from actual execution, predicted execution, or simulated execution. In the case of actual execution, information from the target hardware platform, along with detailed performance metrics measured during execution of the graph, may be fed back to the operation placement and scheduling model so that the model can assess the efficiency of software kernel execution. Taking the measured resource usage into account, the system may then iteratively change the partitioning to produce an updated set of software kernels that better achieves the at least one optimization goal than the initial set of software kernels. In some embodiments, the system uses scheduling measurement information to improve the partitioning. Additionally or alternatively, the system may use mappings to different processors and/or architectures to improve the partitioning.

This iterative execution may improve performance, for example, because it may be difficult to determine precisely the performance of a large multiprocessor software system, especially under dynamic loading that depends on the nature, quality, and content of the incoming information stream (e.g., high to low signal-to-noise ratios, empty or full communication signals, varying sampling rates, varying channel impairment complexity, etc.).

For example, the system may predict an initial set of software kernels that minimizes overall resource usage of the processing units in the target hardware platform. The system may then measure the predicted, simulated, or actual resource usage of the processing units running the initial set of software kernels. The system may then change the partitioning of the software kernels to produce an updated set of software kernels that better achieves the optimization goals.
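The iterative refinement described above can be summarized as a loop that proposes candidate partitionings and keeps those that measure better against the optimization goal. The sketch below relies on stand-in helpers (measure_usage could wrap actual, simulated, or predicted execution; mutate_partition proposes a repartitioning); none of these helpers are part of the described system.

```python
# Sketch of iterative partition refinement with stand-in helpers.
import random

def optimize_partition(initial_partition, measure_usage, mutate_partition,
                       num_iters=20):
    best = initial_partition
    best_cost = measure_usage(best)            # e.g. latency, power, or a blend
    for _ in range(num_iters):
        candidate = mutate_partition(best)     # adjust kernel boundaries / replication
        cost = measure_usage(candidate)
        if cost < best_cost:                   # keep changes that better meet the goal
            best, best_cost = candidate, cost
    return best, best_cost

# Toy usage over a fake one-dimensional "partition" and cost surface:
best, cost = optimize_partition(
    initial_partition=8,
    measure_usage=lambda p: abs(p - 13) + random.random() * 0.1,
    mutate_partition=lambda p: max(1, p + random.choice([-2, -1, 1, 2])),
)
print(best, round(cost, 3))
```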

In some embodiments, to identify an initial set of software kernels or an updated set of software kernels, the system uses predictive and statistical models that include models of computational, memory, and communication costs for hardware. Additionally or alternatively, the system may use one or more machine learning models.

The machine learning model may be a neural network, a Bayesian inference model, or another form of model such as a stochastic regression or classification technique (e.g., an autoregressive moving average (ARMA) model, a support vector machine (SVM), etc.). Neural networks are machine learning models that employ one or more layers of neurons to generate an output, e.g., one or more classifications or regressions, for a received input. In addition to an output layer, a neural network may include one or more hidden layers. The output of each hidden layer may be used as the input to the next layer in the network, i.e., the next hidden layer or the output layer, and connections may also bypass layers or loop back within the same layer, as in the case of recurrent network elements. Each layer of the neural network generates an output from its input in accordance with the network architecture and the respective set of parameters for that layer.
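As a minimal illustration of such a model, the following sketch implements a two-layer neural network that maps features of a candidate placement (e.g., op counts, bytes moved, processor loads) to a predicted cost; the feature set and the random placeholder weights are assumptions, since in practice the model would be trained on simulated or measured executions.

```python
# Minimal sketch of a neural-network cost model with placeholder weights.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output: predicted cost

def predict_cost(placement_features):
    h = np.maximum(placement_features @ W1 + b1, 0.0)   # ReLU hidden layer
    return (h @ W2 + b2).item()

features = rng.normal(size=16)    # stand-in feature vector for one candidate
print(predict_cost(features))
```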

The machine learning model may be trained on simulated data or actual target hardware on which computational graphs representing radio signal processing applications will be deployed, or may be tuned in operation based on a goodness measure of the output of the model, which may be used as feedback to adjust weights or parameters.

In addition to determining the software kernel partitioning, the system defines a runtime resource schedule to efficiently execute the operations 606 on the target hardware platform. Defining the runtime resource schedule includes determining the data placement of individual software kernels across the processing units of the target hardware platform. In addition to determining the placement of software kernels, the system may also determine buffer sizes between kernels, determine the amount of data that each software kernel should process in a given execution, determine the order in which the software kernels should execute, and determine the amount of information that should be transferred over a bus or memory region between kernel executions or moved between processor domains at a time. The runtime schedule may also determine the size of memory writes for inter-thread communication, the length of processing time between inter-thread communications, and how caches are used during execution.
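These scheduling decisions could be collected in a single structure, as in the hedged sketch below; the field names and example values are hypothetical.

```python
# Illustrative container for runtime scheduling decisions.
from dataclasses import dataclass, field

@dataclass
class RuntimeSchedule:
    placement: dict = field(default_factory=dict)      # kernel -> processing units
    buffer_items: dict = field(default_factory=dict)   # (src, dst) kernel pair -> buffer size
    items_per_call: dict = field(default_factory=dict) # kernel -> work per invocation
    execution_order: list = field(default_factory=list)

schedule = RuntimeSchedule(
    placement={"kernel_1": ["cpu0"], "kernel_2": ["gpu0"]},
    buffer_items={("kernel_1", "kernel_2"): 8192},
    items_per_call={"kernel_1": 4096, "kernel_2": 8192},
    execution_order=["kernel_1", "kernel_2"],
)
```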

This runtime resource schedule may be determined using the same or a similar model as that used to determine kernel placement. The model may be a predictive and statistical model or a machine learning model that predicts optimal scheduling and resource allocation and usage. Additionally or alternatively, the model may be based on performance measurements from previous executions. These models can generally predict the performance of different placement and execution scheduling configurations, enabling the best estimate of the optimal configuration to be selected from a plurality of candidates. Over successive iterations, the accuracy of the estimates can be improved as confidence in the estimates and measurements increases.

Figs. 8A and 8B illustrate an example of runtime scheduling of operations, showing how operations can be parallelized over data in a tensor dimension. As shown in figs. 8A and 8B by operations 804a1 and 804a2, operations can be parallelized over data in a tensor dimension as long as the operations do not have internal state. The example runtime schedule 800a of FIG. 8A shows that the first input information contains 16 items in the first tensor dimension. The operation placement and scheduling system may use the 16 elements in this first dimension to predict or simulate the execution of operation A 802a. Operation B 804a may be parallelized over data in this first dimension, with half of the elements going to one instantiation of operation B 804a1 and the other half of the elements going to a second instantiation of operation B 804a2. The output from the parallelized execution of operation B is sent to operation C 806a, where operation C runs on all 16 elements of the first tensor dimension and the resulting tensor is output.

Fig. 8B illustrates the operations of fig. 8A with twice the workload. Instead of operations being performed on 16 elements at a time, operations are performed on 32 elements at a time. The example runtime schedule 800b of FIG. 8B shows that the first input information contains 32 items in the first tensor dimension. The operation placement and scheduling system may use the 32 elements in this first dimension to predict or simulate the execution of operation A 802b. Operation B 804b may be parallelized over data in this first dimension, with half of the elements going to one instantiation of operation B 804b1 and the other half of the elements going to a second instantiation of operation B 804b2. The output from the parallelized execution of operation B is sent to operation C 806b, where operation C runs on all 32 elements of the first tensor dimension and the resulting tensor is output.
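The data parallelism of FIGS. 8A and 8B can be sketched as follows, using an elementwise stand-in for the stateless operation B; the halves are processed sequentially here for clarity, whereas the scheduler would place them on different cores.

```python
# Sketch of splitting work along the first tensor dimension for a stateless op.
import numpy as np

def op_b(x):                      # elementwise stand-in for stateless operation B
    return np.tanh(x)

batch = np.random.randn(32, 64)   # 32 items in the first tensor dimension
half_1, half_2 = np.split(batch, 2, axis=0)     # 16 items to each instantiation
out = np.concatenate([op_b(half_1), op_b(half_2)], axis=0)
assert np.allclose(out, op_b(batch))            # same result as one 32-item run
```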

Using a machine learning model or a predictive and statistical model or a combination of both, the system determines whether the resource usage of the runtime schedule 800B of FIG. 8B, which requires longer kernel runs and larger I/O sizes, produces more efficient performance at runtime than the runtime schedule 800a of FIG. 8A, which does half the work at each execution. The system then selects the most efficient runtime resource schedule to send to the target hardware platform.
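A toy comparison of the two chunk sizes illustrates the trade-off the models weigh: larger runs amortize fixed per-launch overhead at the price of larger buffers and latency. The overhead and per-item constants below are invented for illustration, not measurements.

```python
# Toy cost-per-item estimate for 16 items per run versus 32 items per run.
def cost_per_item(items_per_run, launch_overhead_us=50.0, per_item_us=2.0):
    return (launch_overhead_us + per_item_us * items_per_run) / items_per_run

for n in (16, 32):
    print(n, "items/run ->", round(cost_per_item(n), 2), "us/item")
# Larger runs amortize launch overhead but need bigger buffers and add latency,
# which is why the system weighs both against the optimization goal.
```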

To define an efficient runtime schedule, the system may also consider other processes running on the target hardware platform or to be executed concurrently by the target system. In one embodiment, the other process may be a second application that is also represented as a raw radio signal processing computation dataflow graph. In this case, the system may identify a particular runtime schedule for the two computational dataflow graphs that achieves a particular optimization goal when they are executed on the target system. In other cases, software not explicitly derived from a similar dataflow graph may execute on cores shared with the computation dataflow graph, or separately on cores not included in the dataflow graph placement algorithm.

In another embodiment, the system may simply identify one or more other processes running on the target system and the resources they are using. For example, the target system may be executing high-priority system tasks that must not be disrupted. The operation placement and scheduling system may take this resource availability and usage into account when determining the runtime resource schedule and the placement of the computational graph.

In some embodiments, defining the runtime schedule includes determining an optimal processing unit type on which to perform the operations of a software kernel and assigning the software kernel to at least one processing unit of that optimal type. For example, when distributing a computational graph across central processing units (CPUs), GPUs, DSPs, tensor or vector math coprocessors, other neuromorphic processors, and/or field programmable gate arrays (FPGAs), the graph may be partitioned with initial high-rate operations on the FPGAs, additional lower-complexity operations on the CPUs, and higher-complexity operations on the GPUs. In some cases where an FPGA is not available, high-throughput and high-complexity operations may be scheduled on the GPU, while external interfaces, transforms, or lower-complexity operations may be placed on the CPU. In some cases, when only a CPU is available, all units may be scheduled on the CPU. This placement optimization is performed using the predicted performance metrics of candidate placements and execution schedules as described previously, taking into account the resources available on each computing platform.
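The placement policy described above might be approximated by a simple heuristic such as the following sketch; the thresholds, labels, and kernel names are illustrative assumptions rather than the system's actual decision logic.

```python
# Hedged sketch of a heterogeneous placement heuristic.
def place_kernel(rate, complexity, available=("fpga", "gpu", "cpu")):
    if rate == "high" and "fpga" in available:
        return "fpga"                      # initial high-rate front-end stages
    if complexity == "high":
        return "gpu" if "gpu" in available else "cpu"
    return "cpu"                           # external interfaces, low-complexity ops

kernels = {"ddc": ("high", "low"), "equalizer": ("medium", "high"),
           "framer": ("low", "low")}
for name, (rate, cplx) in kernels.items():
    print(name, "->", place_kernel(rate, cplx))
    # With no FPGA available: place_kernel(rate, cplx, available=("gpu", "cpu"))
```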

Referring back to FIG. 6, after defining the runtime resource schedule to efficiently perform operations on the target hardware platform, the process 600 allocates resources 608 in the target hardware platform according to the defined runtime resource schedule.

FIGS. 9A and 9B illustrate example systems in which resources and kernel placement are defined through runtime resource scheduling to execute operations efficiently on a target platform. Fig. 9A illustrates an example of a tensor data stream based radio receiver system 900a and fig. 9B illustrates an example of a tensor data stream based radio transmitter system 900b. Both systems may be created using the process 600 described above.

In some embodiments, the system may use only simulation or prediction data and a particular computational graph to determine runtime resource scheduling information for a particular target hardware implementation. The system may determine a fixed configuration of the graph and other runtime scheduling information and provide the fixed configuration to a particular locked-down hardware configuration, such as a mobile phone baseband processor or DSP, for execution.

In other embodiments, the system needs to execute on the target hardware platform in order to determine the correct configuration of the graph and other runtime scheduling information. For example, the system may need to run at different input sample rates or use different modulations in order to correctly determine the configuration of the computational graph of the target hardware platform.

Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, tangibly embodied in computer software or firmware, in computer hardware (including the structures disclosed in this specification and their structural equivalents), or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible, non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.

The term "data processing apparatus" refers to data processing hardware and encompasses all types of devices, apparatuses, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be or further comprise special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for the computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.

The processes or logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes or logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.

A computer adapted to execute a computer program may be based on a general-purpose or special-purpose microprocessor or both or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for executing or performing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Typically, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, the computer need not have such a device. Also, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game player, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example: semiconductor memory devices such as Electrically Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; as well as compact disc-read only memory (CD-ROM) discs and digital video disc-read only memory (DVD-ROM) discs.

To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. In addition, the computer may interact with the user by sending documents to and receiving documents from a device used by the user; for example, by sending a web page to a web browser on the user's device in response to a request received from the web browser. Moreover, the computer may interact with the user by sending a text message or other form of message to a personal device, such as a smartphone, running a messaging application, and in turn receiving the user's response message.

Embodiments of the subject matter described in this specification can be implemented with a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification), or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the internet.

A computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, the server sends data, such as HTML pages, to the user device, for example, for displaying data to and receiving user input from a user interacting with the device acting as a client. Data generated at the user device, e.g., results of the user interaction, may be received at the server from the device.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Specific embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
