Interoperable cloud-based media processing using dynamic network interfaces

Document No.: 157542 | Publication date: 2021-10-26

Note: This technology, Interoperable cloud-based media processing using dynamic network interfaces, was created by Iraj Sodagar and Shuai Zhao on 2020-03-18. Its main content includes: A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP), comprising: obtaining a plurality of tasks for processing the media content; providing an NBMP link Application Program Interface (API) between an NBMP workflow manager and a cloud manager to link the plurality of tasks together; identifying an amount of network resources to be used for processing the media content by using the NBMP link API; and processing the media content according to the identified amount of network resources.

1. A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP), the method being performed by at least one processor, the method comprising:

obtaining a plurality of tasks for processing the media content;

providing an NBMP link Application Program Interface (API) between an NBMP workflow manager and a cloud manager to link the plurality of tasks together;

identifying an amount of network resources to be used for processing the media content by using the NBMP link API; and

processing the media content according to the identified amount of network resources.

2. The method of claim 1, wherein the identifying the amount of network resources to be used for processing the media content comprises monitoring communication between the plurality of tasks over the link by extending NBMP quality of service (QoS) requirements according to at least one parameter.

3. The method of claim 2, wherein the at least one parameter comprises at least one of a minimum latency, a maximum latency, a minimum throughput, a maximum throughput, and an average window.

4. The method of claim 1, wherein the identifying the amount of network resources comprises monitoring a state of the NBMP link API during a media session.

5. The method of claim 1, wherein the identifying the amount of network resources comprises receiving a report from the NBMP link API.

6. The method of claim 1, further comprising providing a function discovery API between the NBMP workflow manager and the cloud manager.

7. The method of claim 6, further comprising discovering, by using the function discovery API, a pre-loaded function for processing the media content.

8. A device for processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP), the device comprising:

at least one memory for storing program code; and

at least one processor for reading the program code and operating in accordance with the program code instructions, the program code comprising:

obtaining code for causing the at least one processor to obtain a plurality of tasks for processing the media content;

providing code for causing the at least one processor to provide an NBMP link Application Program Interface (API) between an NBMP workflow manager and a cloud manager to link the plurality of tasks together;

identifying code for causing the at least one processor to identify an amount of network resources to be used for processing the media content by using the NBMP link API; and

processing code for causing the at least one processor to process the media content in accordance with the identified amount of network resources.

9. The device of claim 8, wherein the identifying code further causes the at least one processor to monitor communication between the plurality of tasks over the link by extending NBMP quality of service (QoS) requirements according to at least one parameter.

10. The device of claim 9, wherein the at least one parameter comprises at least one of a minimum latency, a maximum latency, a minimum throughput, a maximum throughput, and an average window.

11. The device of claim 8, wherein the identifying code further causes the at least one processor to monitor a status of the NBMP link API during a media session.

12. The device of claim 8, wherein the identifying code further causes the at least one processor to receive a report from the NBMP link API.

13. The device of claim 8, wherein the program code further comprises code for providing a function discovery API between the NBMP workflow manager and the cloud manager.

14. The device of claim 13, wherein the function discovery API is used to discover a pre-loaded function for processing the media content.

15. A non-transitory computer-readable medium having stored therein instructions that, when executed by at least one processor of a device for processing media content in Moving Picture Experts Group (MPEG) network-based media processing (NBMP), cause the at least one processor to:

obtain a plurality of tasks for processing the media content;

provide an NBMP link Application Program Interface (API) between an NBMP workflow manager and a cloud manager to link the plurality of tasks together;

identify an amount of network resources to be used for processing the media content by using the NBMP link API; and

process the media content according to the identified amount of network resources.

16. The non-transitory computer-readable medium of claim 15, wherein the identifying the amount of network resources to be used for processing the media content comprises monitoring communication between the plurality of tasks over the link by extending NBMP quality of service (QoS) requirements according to at least one parameter.

17. The non-transitory computer-readable medium of claim 16, wherein the at least one parameter comprises at least one of a minimum latency, a maximum latency, a minimum throughput, a maximum throughput, and an average window.

18. The non-transitory computer-readable medium of claim 15, wherein the identifying the amount of network resources comprises monitoring a state of the NBMP link API during a media session.

19. The non-transitory computer-readable medium of claim 15, wherein the identifying the amount of network resources comprises receiving a report from the NBMP link API.

20. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the at least one processor to provide a function discovery API between the NBMP workflow manager and the cloud manager.

Background

The Moving Picture Experts Group (MPEG) network-based media processing (NBMP) project has developed a concept for processing media on the cloud. However, current NBMP designs do not provide Application Program Interface (API) abstraction for network management; they provide APIs only for cloud resources such as hardware platforms. Furthermore, current NBMP workflow managers work only as task distributors, similar to Hadoop master nodes, which dispatch tasks to worker nodes based on a predefined computing configuration. A workflow manager can obtain worker computing resource information, for example by using the Simple Network Management Protocol (SNMP). However, it is difficult for the workflow manager to acquire network resource information such as network topology, bandwidth, latency, and quality of service (QoS) between links.

Disclosure of Invention

According to an embodiment, a method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP) performed by at least one processor comprises: obtaining a plurality of tasks for processing the media content; providing an NBMP link Application Program Interface (API) between the NBMP workflow manager and the cloud manager to link the plurality of tasks together; identifying an amount of network resources to be used for processing the media content by using an NBMP link API; and processing the media content according to the identified amount of network resources.

According to an embodiment, an apparatus for processing media content in Moving Picture Experts Group (MPEG) network-based media processing (NBMP) comprises: at least one memory for storing program code; and at least one processor for reading the program code and operating in accordance with the program code instructions, the program code comprising: obtaining code for causing the at least one processor to obtain a plurality of tasks for processing the media content; providing code for causing the at least one processor to provide an NBMP link Application Program Interface (API) between the NBMP workflow manager and the cloud manager to link the plurality of tasks together; identifying code for causing the at least one processor to identify an amount of network resources to be used for processing the media content by using the NBMP link API; and processing code for causing the at least one processor to process the media content in accordance with the identified amount of network resources.

According to an embodiment, a non-transitory computer-readable medium stores instructions that, when executed by at least one processor of an apparatus for processing media content in Moving Picture Experts Group (MPEG) network-based media processing (NBMP), cause the at least one processor to: obtain a plurality of tasks for processing the media content; provide an NBMP link Application Program Interface (API) between the NBMP workflow manager and the cloud manager to link the plurality of tasks together; identify an amount of network resources to be used for processing the media content by using the NBMP link API; and process the media content according to the identified amount of network resources.

Drawings

FIG. 1 is a diagram of an environment in which methods, apparatus and systems described herein may be implemented according to an embodiment;

FIG. 2 is a block diagram of exemplary components of one or more of the devices of FIG. 1;

FIG. 3A is a block diagram of an NBMP system according to an embodiment;

FIG. 3B is a block diagram of an NBMP system according to an embodiment;

FIG. 4 is a flow diagram of a method of processing media content in MPEG NBMP, according to an embodiment; and

FIG. 5 is a block diagram of an apparatus for processing media content in MPEG NBMP, according to an embodiment.

Detailed Description

Embodiments described herein provide functional improvements to the MPEG NBMP standard. Such improvements can increase media processing efficiency, increase the speed and reduce the cost of media service deployment, and allow large-scale deployment of media services by utilizing public, private, or hybrid cloud services.

In an example, a functional improvement to the MPEG NBMP standard includes allowing the NBMP source to be a user local input, a cloud local input, or a cloud remote input. This increases the flexibility of deploying services for local applications, cloud-based applications, or applications running remotely on the cloud. Meanwhile, a single interface may be defined between the NBMP workflow manager and the cloud manager, so that operations on a media session are performed through the cloud manager and the network controller. Since the cloud manager has complete knowledge of the cloud, this simplifies operations and makes them more feasible for the NBMP workflow manager.

Furthermore, functional improvements to the MPEG NBMP standard include giving the workflow manager sufficient information about the network and physical computing resources. NBMP has an API to the cloud resource and network manager. The API allows the NBMP workflow manager to communicate with the cloud service to configure media services, establish sessions, and allocate computing and network resources without any knowledge of the underlying cloud platform. The cloud manager translates the NBMP workflow manager's requests and information into the internal cloud platform interface. Furthermore, the NBMP workflow manager can manage, monitor, and analyze the performance of media sessions through this standard API without having to know the internal cloud platform logic. Network management requirements are also included.

Further, functional improvements to the MPEG NBMP standard include adding a function discovery API to the APIs between the NBMP workflow manager and the cloud manager. This allows the workflow manager to discover preferred, optimized implementations of functions and use them on the cloud, rather than loading generic implementations onto the cloud.

Fig. 1 is a diagram of an environment 100 in which methods, apparatus, and systems described herein may be implemented according to an embodiment. As shown in FIG. 1, environment 100 may include user device 110, platform 120, and network 130. The devices of environment 100 may be interconnected via wired connections, wireless connections, or a combination of wired and wireless connections.

User device 110 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with platform 120. For example, user device 110 may include a computing device (e.g., desktop, laptop, tablet, handheld, smart speaker, server, etc.), a mobile phone (e.g., smartphone, wireless phone, etc.), a wearable device (e.g., smart glasses or smart watch), or the like. In some implementations, user device 110 may receive information from platform 120 and/or transmit information to platform 120.

Platform 120 includes one or more devices as described elsewhere herein. In some implementations, the platform 120 may include a cloud server or a group of cloud servers. In some implementations, the platform 120 may be designed to be modular such that software components may be swapped in and out according to particular needs. In this way, the platform 120 may be easily and/or quickly reconfigured for different uses.

In some implementations, as shown, the platform 120 may be hosted in a cloud computing environment 122. It is noted that although implementations described herein describe platform 120 as being hosted in cloud computing environment 122, in some implementations platform 120 may not be cloud-based (i.e., may be implemented outside of the cloud computing environment) or may be partially cloud-based.

Cloud computing environment 122 comprises an environment hosting platform 120. The cloud computing environment 122 can provide computing, software, data access, storage, etc. services that do not require end users (e.g., user devices 110) to have knowledge of the physical location and configuration of the system(s) and/or device(s) of the hosting platform 120. As shown, the cloud computing environment 122 may include a set of computing resources 124 (collectively referred to as "computing resources 124," individually referred to as "computing resource 124").

Computing resources 124 include one or more personal computers, workstation computers, server devices, or other types of computing and/or communication devices. In some implementations, the computing resources 124 may host the platform 120. Cloud resources may include computing instances executing in computing resources 124, storage devices provided in computing resources 124, data transfer devices provided by computing resources 124, and so forth. In some implementations, the computing resources 124 may communicate with other computing resources 124 via a wired connection, a wireless connection, or a combination of wired and wireless connections.

As further shown in FIG. 1, the computing resources 124 include a set of cloud resources, such as one or more application programs ("APPs") 124-1, one or more virtual machines ("VMs") 124-2, virtualized memory ("VSs") 124-3, or one or more hypervisors ("HYPs") 124-4, among others.

The application 124-1 comprises one or more software applications that may be provided to the user device 110 and/or platform 120 or accessed by the user device 110 and/or platform 120. The application 124-1 may eliminate the need to install and execute software applications on the user device 110. For example, the application 124-1 may include software associated with the platform 120 and/or any other software capable of being provided via the cloud computing environment 122. In some implementations, one application 124-1 can send information to or receive information from one or more other applications 124-1 via the virtual machine 124-2.

Virtual machine 124-2 comprises a software implementation of a machine (e.g., a computer) that executes programs like a physical machine. The virtual machine 124-2 may be a system virtual machine or a process virtual machine, depending on the degree of use and correspondence of the virtual machine 124-2 with any real machine. The system virtual machine may provide a complete system platform that supports execution of a complete operating system ("OS"). The process virtual machine may execute a single program and may support a single process. In some implementations, the virtual machine 124-2 may execute on behalf of a user (e.g., the user device 110) and may manage the infrastructure of the cloud computing environment 122, such as data management, synchronization, or long duration data transfer.

Virtualized memory 124-3 comprises one or more storage systems and/or one or more devices using virtualization techniques within a storage system or device of computing resources 124. In some implementations, in the context of a storage system, the types of virtualization can include block virtualization and file virtualization. Block virtualization may refer to abstracting (or separating) logical storage from physical storage so that a storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may allow an administrator of the storage system flexibility in how the administrator manages storage for end users. File virtualization may eliminate dependencies between data accessed at the file level and the location where the file is physically stored. This may enable optimizing storage usage, server consolidation, and/or performance of non-disruptive file migration.

The hypervisor 124-4 may provide hardware virtualization techniques that allow multiple operating systems (e.g., "guest operating systems") to execute concurrently on a host computer, such as the computing resources 124. Hypervisor 124-4 may present a virtual operating platform to the guest operating system and may manage the execution of the guest operating system. Multiple instances of multiple operating systems may share virtualized hardware resources.

The network 130 includes one or more wired and/or wireless networks. For example, network 130 may include a cellular network (e.g., a fifth generation (5G) network, a Long Term Evolution (LTE) network, a third generation (3G) network, a Code Division Multiple Access (CDMA) network, etc.), a Public Land Mobile Network (PLMN), a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the internet, a fiber-based network, etc., and/or a combination of these or other types of networks.

The number and arrangement of devices and networks shown in FIG. 1 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or a different arrangement of devices and/or networks than those shown in FIG. 1. Further, two or more of the devices shown in FIG. 1 may be implemented within a single device, or a single device shown in FIG. 1 may be implemented by a plurality of distributed devices. Additionally or alternatively, a set of devices (e.g., one or more devices) of environment 100 may perform one or more functions described as being performed by another set of devices of environment 100.

FIG. 2 is a block diagram of exemplary components of one or more of the devices of FIG. 1. Device 200 may correspond to user device 110 and/or platform 120. As shown in fig. 2, device 200 may include a bus 210, a processor 220, a memory 230, a storage component 240, an input component 250, an output component 260, and a communication interface 270.

Bus 210 includes components that allow communication among the components of device 200. The processor 220 is implemented in hardware, firmware, or a combination of hardware and software. Processor 220 is a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Accelerated Processing Unit (APU), a microprocessor, a microcontroller, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), or another type of processing component. In some implementations, the processor 220 includes one or more processors that can be programmed to perform functions. Memory 230 includes a Random Access Memory (RAM), a Read Only Memory (ROM), and/or another type of dynamic or static storage device (e.g., flash memory, magnetic memory, and/or optical memory) that stores information and/or instructions for use by processor 220.

The storage component 240 stores information and/or software related to the operation and use of the device 200. For example, storage component 240 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optical disk, and/or a solid state disk), a Compact Disc (CD), a Digital Versatile Disc (DVD), a floppy disk, a tape cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, and a corresponding drive.

Input components 250 include components that allow device 200 to receive information, such as via user input (e.g., a touch screen display, keyboard, keypad, mouse, buttons, switches, and/or microphone). Additionally or alternatively, input component 250 may include sensors for sensing information (e.g., a Global Positioning System (GPS) component, an accelerometer, a gyroscope, and/or an actuator). Output components 260 include components that provide output information from device 200 (e.g., a display, a speaker, and/or one or more Light Emitting Diodes (LEDs)).

Communication interface 270 includes transceiver-like components (e.g., a transceiver and/or separate receiver and transmitter) that enable device 200 to communicate with other devices, e.g., via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface 270 may allow device 200 to receive information from another device and/or provide information to another device. For example, the communication interface 270 may include an ethernet interface, an optical interface, a coaxial interface, an infrared interface, a Radio Frequency (RF) interface, a Universal Serial Bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like.

Device 200 may perform one or more processes described herein. Device 200 may perform these processes in response to processor 220 executing software instructions stored by a non-transitory computer-readable medium, such as memory 230 and/or storage component 240. A computer-readable medium is defined herein as a non-transitory memory device. The memory device includes memory space within a single physical memory device or memory space distributed across multiple physical memory devices.

The software instructions may be read into memory 230 and/or storage component 240 from another computer-readable medium or from another device via communication interface 270. When executed, software instructions stored in memory 230 and/or storage component 240 may cause processor 220 to perform one or more processes described herein. Additionally or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.

The number and arrangement of the components shown in FIG. 2 are provided as an example. In practice, device 200 may include additional components, fewer components, different components, or a different arrangement of components than those shown in FIG. 2. Additionally or alternatively, a set of components (e.g., one or more components) of device 200 may perform one or more functions described as being performed by another set of components of device 200.

Fig. 3A is a block diagram of an NBMP system according to an embodiment.

FIG. 3A illustrates the interface between the NBMP workflow manager and the cloud manager. The cloud manager may translate requests into internal APIs and communicate them to different hardware modules.

According to an embodiment, an abstraction architecture is defined that places the cloud resource and network manager in the middle of the NBMP reference architecture and extends the NBMP APIs to interface between the NBMP workflow manager and the cloud resource and network manager.

Fig. 3B is a block diagram of an NBMP system 301 according to an embodiment.

Referring to fig. 3B, NBMP system 301 includes NBMP source 310, NBMP workflow manager 320, function repository 330, network controller 340, one or more media processing entities 350, media source 360, and media sink 370.

NBMP source 310 may receive instructions from third party entity 380, may communicate with NBMP workflow manager 320 via an NBMP workflow API, and may communicate with function repository 330 via a function discovery API. For example, NBMP source 310 may send a workflow description document to NBMP workflow manager 320 and may read a function description of a function stored in a memory of function repository 330. These functions may include media processing functions such as media decoding functions, feature point extraction functions, camera parameter extraction functions, projection method functions, seam information extraction functions, blending functions, post-processing functions, and encoding functions. NBMP source 310 may include at least one processor and a memory storing code configured to cause the at least one processor to perform the functions of NBMP source 310.

By sending a workflow description document to NBMP workflow manager 320, NBMP source 310 may request NBMP workflow manager 320 to create a workflow that includes tasks 351 and 352 to be performed by one or more media processing entities 350. The workflow description document may include descriptors, each of which may include a parameter.

For example, NBMP source 310 may select one or more functions stored in function repository 330 and send a workflow description document to NBMP workflow manager 320 that includes descriptors for describing details such as input and output data and the selected one or more functions and requirements for the workflow. The workflow description document may further comprise a set of task descriptions and a connection map of inputs and outputs of tasks 351 and 352 to be performed by one or more of the media processing entities 350. When NBMP workflow manager 320 receives such information from NBMP source 310, NBMP workflow manager 320 may create a workflow by instantiating tasks 351 and 352 based on the function name and connecting tasks 351 and 352 according to the connection map.
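A workflow description document of this kind can be sketched as a plain data structure. The field names below are illustrative assumptions for this sketch, not the normative NBMP descriptor names:

```python
# Illustrative sketch of a workflow description document (WDD).
# All field names here are assumptions for illustration; the normative
# NBMP schema defines the actual descriptor and parameter names.
wdd = {
    "input": {"media-type": "video/mp4", "origin": "media-source-360"},
    "output": {"media-type": "video/mp4", "destination": "media-sink-370"},
    "functions": ["media-decoding", "stitching", "encoding"],
    "requirements": {"qos": {"max-latency-ms": 100}},
    # Connection map: each entry wires one task's output to another's input.
    "connection-map": [
        {"from": "task-351:out", "to": "task-352:in"},
    ],
}

def validate_wdd(doc):
    """Check that the WDD carries the pieces the workflow manager needs."""
    required = {"input", "output", "connection-map"}
    return required.issubset(doc)

print(validate_wdd(wdd))
```

Given such a document, the workflow manager would instantiate one task per selected function and connect their inputs and outputs per the connection map.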

Additionally or alternatively, NBMP source 310 may request NBMP workflow manager 320 to create a workflow by using a set of keywords. For example, NBMP source 310 may send a workflow description document including a set of keywords to NBMP workflow manager 320, which NBMP workflow manager 320 may use to find an appropriate function or functions stored in function repository 330. When NBMP workflow manager 320 receives such information from NBMP source 310, NBMP workflow manager 320 may create a workflow by searching for the appropriate function or functions using keywords that may be specified in the process descriptors of the workflow description document, and by providing and connecting tasks 351 and 352 using other descriptors in the workflow description document.

NBMP workflow manager 320 may communicate with function repository 330 via a function discovery API, and may communicate with one or more media processing entities 350 via an NBMP task API, an NBMP link API, and a function discovery API through network controller 340. NBMP workflow manager 320 may include at least one processor and a memory storing code configured to cause the at least one processor to perform the functions of NBMP workflow manager 320.

The NBMP link API is added to the NBMP reference architecture. As shown in FIG. 3B, any link between two tasks also has an API, similar to that of a task. Using a link's API, the NBMP workflow manager can establish the required network resources and monitor the status of the link, or receive reports from the link, during the media session. The API should be implemented by the cloud platform: for each link, the API is established and the cloud platform instantiates an instance of it using its network controller, cloud manager, or virtual network manager.
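The link behavior described above can be sketched in a few lines. The class and method names here are illustrative assumptions, not the standard's API surface:

```python
# Minimal in-memory sketch of the NBMP link API idea: each link between
# two tasks exposes configure/status operations, much like a task does.
# Names are illustrative assumptions, not the normative API.
class Link:
    def __init__(self, src_task, dst_task):
        self.src, self.dst = src_task, dst_task
        self.qos = {}
        self.state = "instantiated"

    def configure(self, qos_requirements):
        # In a real deployment, the cloud platform's network controller
        # would allocate network resources matching these requirements.
        self.qos = dict(qos_requirements)
        self.state = "running"

    def status(self):
        # The workflow manager can poll this during the media session,
        # or equivalently receive reports pushed by the link.
        return {"state": self.state, "qos": self.qos}

link = Link("task-351", "task-352")
link.configure({"min-throughput-bps": 5_000_000, "max-latency-ms": 50})
print(link.status())
```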

The NBMP function discovery API is also added to the NBMP reference architecture. This API enables the NBMP workflow manager to discover preloaded functions on the cloud and use them, rather than reloading them.

NBMP workflow manager 320 may use NBMP task APIs to establish, configure, manage, and monitor one or more of tasks 351 and 352 of a workflow that may be performed by one or more media processing entities 350. In an embodiment, NBMP workflow manager 320 may use NBMP task APIs to update and destroy tasks 351 and 352. To configure, manage, and monitor the tasks 351 and 352 of the workflow, NBMP workflow manager 320 may send messages, such as requests, to one or more of the media processing entities 350, where each message may have descriptors, each of which may include parameters. Each of tasks 351 and 352 may include one or more media processing functions 354 and one or more configurations 353 for the one or more media processing functions 354.

In an embodiment, upon receiving a workflow description document from NBMP source 310 that does not include a task list (e.g., that includes a list of keywords instead of a task list), NBMP workflow manager 320 may search function repository 330 via the function discovery API, based on the keywords or task descriptions in the workflow description document, to find the appropriate function or functions to run as tasks 351 and 352 of the current workflow. After identifying the appropriate function or functions, NBMP workflow manager 320 may configure the selected tasks in the workflow by using the NBMP task API. For example, NBMP workflow manager 320 may extract configuration data from the information received from NBMP source 310 and configure tasks 351 and 352 based on the extracted configuration data.
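The keyword-based lookup described above might look like the following sketch, with a toy in-memory repository standing in for function repository 330. All identifiers and keyword sets are invented for illustration:

```python
# Sketch of keyword-based function discovery against a function repository.
# Repository contents and keyword fields are illustrative assumptions.
REPOSITORY = [
    {"id": "fn-decode-1", "keywords": {"decode", "video"}},
    {"id": "fn-stitch-1", "keywords": {"stitch", "360", "video"}},
    {"id": "fn-encode-1", "keywords": {"encode", "video"}},
]

def discover_functions(keywords):
    """Return repository functions matching any of the given keywords."""
    wanted = set(keywords)
    return [f for f in REPOSITORY if f["keywords"] & wanted]

matches = discover_functions(["stitch", "encode"])
print([f["id"] for f in matches])
```

In the architecture above, each match would then be instantiated as a task and configured via the NBMP task API.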

One or more media processing entities 350 may be configured to receive media content from media source 360, process the received media content according to a workflow that includes tasks 351 and 352 and is created by NBMP workflow manager 320, and output the processed media content to media sink 370. Each of the one or more media processing entities 350 may include at least one processor and memory storing code configured to cause the at least one processor to perform functions of the one or more media processing entities 350.

The network controller 340 may include at least one processor and a memory storing code configured to cause the at least one processor to perform the functions of the network controller 340.

Media source 360 may include a memory that stores media and may be integrated with or separate from NBMP source 310. In an embodiment, NBMP workflow manager 320 may notify NBMP source 310 and/or media source 360 when preparing a workflow, and media source 360 may transmit media content to one or more of media processing entities 350 based on the notification to prepare the workflow.

The media sink 370 may include at least one processor and at least one display configured to display media content processed by the one or more media processing entities 350.

The third party entity 380 may include at least one processor and a memory storing code configured to cause the at least one processor to perform the functions of the third party entity 380.

As described above, messages from NBMP source 310 (e.g., a workflow description document for requesting creation of a workflow) to NBMP workflow manager 320, and messages from NBMP workflow manager 320 to one or more media processing entities 350 (e.g., for causing a workflow to be executed) may include descriptors, each descriptor including a parameter. In an embodiment, communication between any of the components of NBMP system 301 using the API may include descriptors, each descriptor including a parameter.

Extended requirement descriptor

According to an embodiment, to establish and monitor networking between two tasks, the NBMP QoS requirements object is extended with the parameters shown in Table 1 below: minimum latency, maximum latency, minimum throughput, maximum throughput, and averaging window.

TABLE 1 QoS requirements extension

Parameter | Description
minimum latency | minimum latency of the link between the two tasks
maximum latency | maximum latency of the link between the two tasks
minimum throughput | minimum bandwidth of the link between the two tasks
maximum throughput | maximum bandwidth of the link between the two tasks
averaging window | window over which latency and throughput are measured
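A sketch of such an extended QoS-requirements object, serialized as JSON, is given below. The exact key spellings, units, and value types are assumptions for illustration, not normative NBMP definitions:

```python
import json

# Sketch of an extended QoS-requirements object for a link between two
# tasks; key names follow the extension parameters described above, but
# their exact spelling and units are assumptions.
qos_requirements = {
    "min-delay": 10,               # minimum latency tolerated on the link, ms
    "max-delay": 100,              # maximum latency tolerated on the link, ms
    "min-throughput": 5_000_000,   # minimum bandwidth, bits per second
    "max-throughput": 50_000_000,  # maximum bandwidth, bits per second
    "averaging-window": 1000,      # measurement averaging window, ms
}

print(json.dumps(qos_requirements))
```

The workflow manager could hand such an object to the cloud manager when establishing a link, and use the same parameters when monitoring it.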

Linking resource API

According to embodiments, a link is similar to a task, so a simplified task resource API may be used. A link resource API may be added, or the task resource API may be extended, to support networking.

According to embodiments, the link API operation is similar to the task API operation, and the same configuration API may be used.
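Since the link resource API mirrors the task resource API, its operations can be sketched as a create/retrieve/update/delete interface. The class, method names, and payload fields below are assumptions, not the normative NBMP resource definitions:

```python
# Minimal sketch of a link resource API mirroring the task resource API;
# operation names and record fields are illustrative assumptions.
class LinkResourceAPI:
    def __init__(self):
        self._links = {}
        self._next_id = 0

    def create(self, from_task, to_task, qos):
        """CreateLink: allocate network resources between two tasks."""
        link_id = f"link-{self._next_id}"
        self._next_id += 1
        self._links[link_id] = {"from": from_task, "to": to_task, "qos": qos}
        return link_id

    def retrieve(self, link_id):
        """RetrieveLink: report the current state of the link."""
        return self._links[link_id]

    def update(self, link_id, qos):
        """UpdateLink: reconfigure the link, e.g. with new QoS requirements."""
        self._links[link_id]["qos"] = qos

    def delete(self, link_id):
        """DeleteLink: release the link's network resources."""
        del self._links[link_id]

api = LinkResourceAPI()
link = api.create("task-351", "task-352", {"min-throughput": 5_000_000})
print(api.retrieve(link)["from"])
```

In this view the same configuration API serves both tasks and links, with the link's QoS requirements playing the role of the task's configuration.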

Discovering preloaded functions in a cloud

According to an embodiment, the NBMP function may already be preloaded in the cloud service. Some functions may be implemented by the cloud platform, or a third party vendor may have an optimized implementation of their functions for a particular cloud solution.

According to an embodiment, a Uniform Resource Locator (URL) in the process descriptor of a function may be used to identify the location of an implementation of the function. Before loading the implementation into the cloud, the NBMP workflow manager can query whether a preferred implementation of the function already exists on the cloud platform. If it does, the NBMP workflow manager may use that particular implementation. This requires that functions from the same vendor have unique IDs, and that each implementation of a function has a unique ID.
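The preloaded-function lookup can be sketched as below. The platform table, function IDs, and URL are hypothetical placeholders:

```python
# Sketch of checking the cloud platform for a preferred (preloaded)
# implementation before uploading one; the platform contents, IDs, and
# URL are hypothetical.
PLATFORM_IMPLEMENTATIONS = {
    # vendor function ID -> platform-assigned implementation ID
    "vendor.x.scale:1.0": "impl-cloud-0007",
}

def resolve_function(function_id, download_url):
    """Prefer a platform-resident implementation; otherwise hand the
    cloud manager a URL from which to download an implementation."""
    impl_id = PLATFORM_IMPLEMENTATIONS.get(function_id)
    if impl_id is not None:
        return ("preloaded", impl_id)
    return ("download", download_url)

print(resolve_function("vendor.x.scale:1.0", "https://example.com/scale.pkg"))
print(resolve_function("vendor.z.mix:1.0", "https://example.com/mix.pkg"))
```

The returned implementation ID is then what the workflow manager uses when creating the corresponding task.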

According to an embodiment, the workflow creation of the NBMP process comprises:

1. The NBMP source creates a workflow using the workflow API. It sends the workflow description document as part of the request. The workflow manager examines the workflow description document and begins building the workflow.

2. The workflow manager sends a query or set of queries to the function registry to find the functions it will deploy to create the workflow.

3. For each query, the function registry replies with a short list of potential functions, their descriptions, and their configuration information.

4. The workflow manager selects the set of functions it wants to deploy and contacts the cloud platform according to their requirements to create the required media processing entities and loads the functions on them.

a. For each function, the workflow manager queries to find out whether the platform already supports the preferred implementation of the function. If such an implementation exists, the cloud platform returns a new ID for the function.

i. For the existing preferred function, the workflow manager uses its ID to create the task.

ii. For a function that does not exist on the platform, the workflow manager provides an authorized URL to the cloud manager to download an implementation of the function.

5. The cloud platform confirms the creation of each media processing entity, including the network access information.

6. The workflow manager creates a configuration for each task and sends the configuration to the task using the task API.

7. The task confirms the successful configuration and returns access information so that the workflow manager can connect to the next task.

8. The workflow manager confirms the creation of the workflow to the NBMP source and informs it that it can start processing media.

Thus, according to embodiments, a function discovery API may be used between the workflow manager and the cloud manager. The API may use the general and processing descriptors of existing functions to query the cloud platform for the presence of those functions and to obtain responses.
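The eight workflow-creation steps above can be condensed into a sketch. The Registry, Cloud, and TaskAPI classes below are stand-ins for the function registry, cloud platform, and task API; all field names, IDs, and addresses are assumptions rather than the normative NBMP interfaces:

```python
# Condensed sketch of workflow creation (steps 1-8 above); all classes
# and field names are illustrative stand-ins.
class Registry:
    """Stand-in for the function registry (steps 2-3)."""
    def query(self, keywords):
        return [{"id": "vendor.x.scale:1.0",
                 "url": "https://example.com/scale.pkg",
                 "requirements": {"vcpu": 2},
                 "config": {"width": 1280}}]

class Cloud:
    """Stand-in for the cloud platform / cloud manager (steps 4-5)."""
    preloaded = {"vendor.x.scale:1.0": "impl-cloud-0007"}

    def resolve(self, function_id, url):
        # Step 4a: prefer an implementation the platform already supports.
        if function_id in self.preloaded:
            return ("preloaded", self.preloaded[function_id])  # 4a.i
        return ("download", url)                               # 4a.ii

    def create_mpe(self, requirements, ref):
        # Step 5: the platform confirms the media processing entity,
        # including its network access information.
        return {"ref": ref, "address": "10.0.0.5"}

class TaskAPI:
    """Stand-in for the task API (steps 6-7)."""
    def configure(self, mpe, config):
        # The task confirms configuration and returns access information
        # so the workflow manager can connect it to the next task.
        return {"address": mpe["address"], "config": config,
                "state": "configured"}

def create_workflow(wdd, registry, cloud, task_api):
    """Step 1: receive the workflow description document (wdd) and build
    the workflow; step 8: report readiness to the NBMP source."""
    tasks = []
    for f in registry.query(wdd["keywords"]):
        _kind, ref = cloud.resolve(f["id"], f["url"])
        mpe = cloud.create_mpe(f["requirements"], ref)
        tasks.append(task_api.configure(mpe, f["config"]))
    return {"state": "ready", "tasks": tasks}

workflow = create_workflow({"keywords": ["scale"]},
                           Registry(), Cloud(), TaskAPI())
print(workflow["state"])
```

The sketch keeps the ordering of the steps but collapses the per-function negotiation into a single loop.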

Fig. 4 is a flow diagram of a method 400 of processing media content in MPEG NBMP, according to an embodiment. In some implementations, one or more of the processing blocks of fig. 4 may be performed by platform 120 implementing NBMP system 301. In some implementations, one or more of the processing blocks of fig. 4 may be performed by a group of devices or another device (e.g., user device 110) separate from or including platform 120 implementing NBMP system 301.

As shown in fig. 4, in operation 410, the method 400 includes obtaining a plurality of tasks for processing media content.

In operation 420, the method 400 includes providing an NBMP link Application Program Interface (API) between the NBMP workflow manager and the cloud manager to link the plurality of tasks together.

In operation 430, the method 400 includes identifying an amount of network resources to be used for processing the media content by using the NBMP link API.

In operation 440, the method 400 includes processing the media content according to the identified amount of network resources.

The method may further include monitoring communication between the linked plurality of tasks by extending NBMP quality of service (QoS) requirements in accordance with at least one parameter. The at least one parameter includes at least one of a minimum latency, a maximum latency, a minimum throughput, a maximum throughput, and an averaging window.

Identifying the amount of network resources may include monitoring a status of the NBMP link API during the media session.

Identifying the amount of network resources may include receiving a report from the NBMP link API. A function discovery API may be used to discover preloaded functions for processing the media content.
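One way to identify the amount of network resources from link-API reports during a media session is sketched below. The report fields and the aggregation rule are assumptions for illustration:

```python
# Sketch of identifying network-resource usage from NBMP link API reports
# during a media session; report field names are assumptions.
reports = [
    {"link": "link-0", "measured-throughput": 4_800_000, "measured-delay": 35},
    {"link": "link-1", "measured-throughput": 9_600_000, "measured-delay": 12},
]

def identify_network_resources(link_reports):
    """Aggregate per-link reports into the amount of network
    resources currently in use by the workflow."""
    return {
        "total-throughput": sum(r["measured-throughput"] for r in link_reports),
        "worst-delay": max(r["measured-delay"] for r in link_reports),
    }

print(identify_network_resources(reports))
```

The aggregated figures could then be compared against the extended QoS requirements to decide whether the links need reconfiguration.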

Although fig. 4 illustrates exemplary operations of the method 400, in some implementations, the method 400 may include additional operations, fewer operations, different operations, or a different arrangement of operations than those illustrated in fig. 4. Additionally or alternatively, two or more operations of method 400 may be performed in parallel.

Fig. 5 is a diagram of an apparatus 500 for processing media content in MPEG NBMP, according to an embodiment. As shown in FIG. 5, apparatus 500 includes obtaining code 510, providing code 520, identifying code 530, and processing code 540.

The obtaining code 510 is for causing at least one processor to obtain a plurality of tasks for processing media content.

The providing code 520 is for causing the at least one processor to provide an NBMP link Application Program Interface (API) between the NBMP workflow manager and the cloud manager to link the plurality of tasks together.

The identifying code 530 is for causing the at least one processor to identify an amount of network resources to be used for processing the media content by using the NBMP link API.

The processing code 540 is for causing the at least one processor to process the media content according to the identified amount of network resources.

The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations.

As used herein, the term component is intended to be broadly interpreted as hardware, firmware, or a combination of hardware and software.

It is to be understood that the systems and/or methods described herein may be implemented in various forms of hardware, firmware, or combinations of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of implementations. Thus, the operation and behavior of the systems and/or methods have been described herein without reference to the specific software code. It should be understood that software and hardware can be designed based on the description herein to implement the systems and/or methods.

Even if combinations of features are recited in the claims and/or disclosed in the description, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below is directly dependent on only one claim, a disclosure of possible implementations includes each dependent claim in combination with every other claim in the set of claims.

No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. In addition, as used herein, the articles "a" and "an" are intended to include one or more items, and may be used interchangeably with "one or more". Further, as used herein, the term "group" is intended to include one or more items (e.g., related items, unrelated items, combinations of related and unrelated items, etc.) and may be used interchangeably with "one or more". Where only one item is intended, the term "one" or similar language is used. Further, as used herein, the term "having" and the like are intended to be open-ended terms. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.
