Information processing apparatus, information processing method, and program

Document No.: 174275    Publication date: 2021-10-29

Description: This technology, "Information processing apparatus, information processing method, and program", was created by Yasuaki Yamagishi and Kazuhiko Takabayashi on 2020-03-06. The present disclosure relates to an information processing apparatus, an information processing method, and a program capable of providing seamless streaming. By performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal, an optimized segment is generated from content multicast from a distribution server, and the optimized segment is transmitted to the client terminal. The present technology can be applied to an information processing system that provides seamless streaming.

1. An information processing apparatus comprising:

an optimization processing unit that generates an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal; and

a transmitting unit that transmits the optimized segment to the client terminal.

2. The information processing apparatus according to claim 1,

the transmitting unit transmits the content stream to the client terminal via a first network, the apparatus further comprising:

a synchronization optimization processing request unit that requests another information processing apparatus that streams the content to the client terminal via the second network to execute optimization processing synchronized with the optimization processing unit in response to a handover from the first network to the second network that occurs due to movement of the client terminal.

3. The information processing apparatus according to claim 2,

the synchronization optimization processing request unit transmits, to the other information processing apparatus, information for specifying the segment, a Media Presentation Description (MPD) which is a file that describes metadata of the content, and viewpoint direction information indicating a viewpoint direction in the client terminal, and performs control to copy a processing state in the optimization processing unit.

4. The information processing apparatus according to claim 3,

the viewpoint direction information is attached to the request of the segment and transmitted from the client terminal.

5. The information processing apparatus according to claim 4,

the client terminal notifies the viewpoint direction information to the optimization processing unit by using a URL parameter defined in MPEG-DASH (MPEG dynamic adaptive streaming over HTTP).

6. The information processing apparatus according to claim 2,

the synchronization optimization processing request unit changes the streaming quality of the content before and after the handover when requesting the other information processing apparatus to execute the optimization processing.

7. The information processing apparatus according to claim 6,

the synchronization optimization processing request unit changes the streaming quality of the content based on a traffic prediction in the network after the handover or a resource prediction of the other information processing apparatus.

8. An information processing method for an information processing apparatus, comprising:

generating an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal; and

transmitting the optimized segment to the client terminal.

9. A program for causing a computer of an information processing apparatus to execute information processing, the information processing comprising:

generating an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal; and

transmitting the optimized segment to the client terminal.

Technical Field

The present disclosure relates to an information processing apparatus, an information processing method, and a program, and more particularly, to an information processing apparatus, an information processing method, and a program capable of providing seamless streaming.

Background

In recent years, there has been concern that the processing load on the cloud will increase because streaming viewing on mobile devices has become common at very high speeds. Against this background, load distribution for streaming services using edge computing, in which network resources, computing resources, storage resources, and the like are arranged at the edge of the network in a distributed manner, is drawing attention as one measure for alleviating the processing load on the cloud.

In edge computing, however, the various resources of a single edge server are limited and smaller than those of the central cloud. The configuration, selection, and the like of resources therefore become complicated, and there is also a disadvantage in that the management cost increases. On the other hand, as streaming services for high-quality content such as so-called 4K or 8K spread further in the future, a mechanism for operating such edge computing resources efficiently is considered to be required.

For example, non-patent document 1 discloses a technique of distributing content using MPEG-DASH (MPEG dynamic adaptive streaming over HTTP).

CITATION LIST

Non-patent document

Non-patent document 1: ISO/IEC 23009-1:2012, Information technology - Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats

Disclosure of Invention

Problems to be solved by the invention

Incidentally, during streaming viewing on a mobile device, a handover (hereinafter referred to as an inter-base station handover) occurs when the device moves across cells and is transferred between base stations. When such an inter-base station handover occurs (that is, when the MEC environment described later is transferred), the MEC environment of the transfer destination cell is assumed to differ from that of the source cell, for example, in the number of clients included in the cell and in the execution state of the group of services providing the service to the client group.

Therefore, even in a use case where such an inter-base station handover occurs, it is required that a Virtual Reality (VR) stream personalized to a client is not interrupted and seamless streaming can be performed. Here, seamless streaming means that streaming continues without interruption even in the case of moving across cells.

The present disclosure has been made in view of such circumstances, and an object of the present disclosure is to provide seamless streaming in a use case accompanied by occurrence of inter-base station handover.

Solution to the problem

An information processing apparatus according to an aspect of the present disclosure includes: an optimization processing unit that generates an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of the client terminal according to a viewpoint direction in the client terminal; and a transmitting unit that transmits the optimized segment to the client terminal.

An information processing method or a program according to an aspect of the present disclosure includes: generating an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of the client terminal according to a viewpoint direction in the client terminal; and transmitting the optimized segment to the client terminal.

In one aspect of the present disclosure, by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal, an optimized segment is generated from a content multicast from a distribution server, and the optimized segment is transmitted to the client terminal.

Drawings

Fig. 1 is a diagram for explaining VDO in a known general-purpose server.

Fig. 2 is a diagram showing a configuration example of an embodiment of an information processing system to which the present technology is applied.

Fig. 3 is a diagram showing a configuration example of a 5G core network system function group and an ME host.

Fig. 4 is a diagram for explaining VDO activation, push streaming, and VDO processing.

Fig. 5 is a diagram showing a first configuration example of a stream stack.

Fig. 6 is a diagram illustrating a second configuration example of a stream stack.

Fig. 7 is a diagram illustrating a third configuration example of a stream stack.

Fig. 8 is a diagram for explaining a viewport in a DASH client.

FIG. 9 is a diagram illustrating examples of viewport metrics and viewport data types.

Fig. 10 is a diagram showing an example of an MPD and a VM.

Fig. 11 is a flowchart for explaining a process of distributing a segment subjected to VDO processing.

Fig. 12 is a diagram showing a description example of the workflow description.

Fig. 13 is a diagram showing another description example of the workflow description.

Fig. 14 is a diagram for explaining an application object.

Fig. 15 is a diagram for explaining application activation by the workflow manager.

Fig. 16 is a flowchart for explaining the application activation processing.

Fig. 17 is a diagram for explaining the viewport metrics notification and VDO segment generation processing.

Fig. 18 is a diagram for explaining VDO transfer due to inter-base station handover.

Fig. 19 is a diagram illustrating the movement of a DASH client from a source RAN to a target RAN.

Fig. 20 is a diagram showing a description example of keepAlreadyEstablishedIfFailed.

Fig. 21 is a diagram showing a description example of doNotMigrate.

Fig. 22 is a diagram for explaining a process of transferring a VDO between ME hosts.

Fig. 23 is a flowchart for explaining VDO transfer processing by inter-base station handover.

Fig. 24 is a diagram for explaining a case where VDO cannot be activated on the ME host serving as the transfer destination due to resource shortage.

Fig. 25 is a diagram illustrating the movement of a DASH client from a source RAN to a target RAN.

Fig. 26 is a diagram for explaining processing in a case where VDO cannot be activated on the ME host serving as the transfer destination due to resource shortage.

Fig. 27 is a diagram for explaining a case where a migration destination cell cannot be predicted.

Fig. 28 is a diagram illustrating the movement of a DASH client from a source RAN to a target RAN-A.

Fig. 29 is a diagram for explaining processing performed when a migration destination cell cannot be predicted.

Fig. 30 is a diagram for explaining a process of ensuring fault-tolerant redundancy.

Fig. 31 is a diagram illustrating the movement of a DASH client from a source RAN to a target RAN-A and a target RAN-B.

Fig. 32 is a diagram for explaining a process of ensuring fault-tolerant redundancy.

Fig. 33 is a diagram showing a description example of the workflow description.

Fig. 34 is a diagram for explaining variable replication of the state of VDO processing based on traffic prediction.

Fig. 35 is a flowchart for explaining a process of variably copying the state of the VDO process based on the traffic prediction.

Fig. 36 is a diagram for explaining a segment flow when VDO processing is performed with the synchronization adaptation VDO processing request as a trigger.

Fig. 37 is a diagram showing a description example of a VDO trigger request.

Fig. 38 is a diagram showing another description example of the VDO trigger request.

Fig. 39 is a diagram for explaining VDO processing in the edge server.

Fig. 40 is a block diagram showing a configuration example of an embodiment of a computer to which the present technology is applied.

Detailed Description

Hereinafter, specific embodiments to which the present technology is applied will be described in detail with reference to the accompanying drawings.

< VDO processing in known general-purpose Server >

First, before describing an information processing system to which the present technology is applied, viewport-dependent optimizer (VDO) processing in a known general-purpose server will be described with reference to fig. 1.

As shown in fig. 1, a DASH client typically issues a segment request to a DASH server to request acquisition of a DASH segment. Then, in response to receiving the segment request, the DASH server activates a VDO application (hereinafter, also simply referred to as a VDO) that performs VDO processing.

At this time, one DASH server performs processing corresponding to segment requests from a plurality of DASH clients. Thus, for each segment request received from each DASH client, the VDO generates a DASH segment subjected to VDO processing based on the content of the Viewport Metrics (VM) notified by that DASH client, and returns a response to the segment request (VDO segment response).

Incidentally, the segment request and the VM are typically sent separately from the DASH client. Therefore, in a case where the VM notification interval is longer than the segment length, and particularly in a case where the segment granularity is fine, it may be difficult to associate the switching timing of the VM with the segment targeted for VDO processing. On the other hand, if the VM notification interval is shortened, the data amount of the VMs is assumed to become a traffic load.

< example >

Fig. 2 is a diagram for explaining a use case in which it is assumed that the information processing system to which the present technology is applied is used.

In the configuration example shown in fig. 2, the information processing system 11 includes a cloud 12 and a user terminal 13.

The cloud 12 is configured by connecting a plurality of servers via a network, and each server performs processing to provide various services. For example, in the information processing system 11, the cloud 12 may provide a streaming service that distributes VR content to the user terminal 13.

As shown, the cloud 12 has a configuration in which ME hosts 31-1 to 31-3, an origin server 32, and an ME platform (orchestrator) 33 are connected via a network. Note that the ME hosts 31-1 to 31-3 are configured similarly; they are simply referred to as the ME host 31 when they do not need to be distinguished from one another, and the blocks configuring each ME host 31 are referred to similarly. Also, the ME host 31 includes a VDO 41 and a database saving unit 42, and the VDO 41 includes a storage unit 43. Further, the ME platform (orchestrator) 33 includes a database holding unit 61 and a workflow manager 62.

For example, a smart phone, a head mounted display, or the like as shown in fig. 8 may be used as the user terminal 13, and the user terminal may receive and display VR content distributed from the cloud 12 through a streaming service. For example, the user terminal 13 has a configuration in which the DASH client 21 is installed.

Then, a 5G multi-access edge computing (MEC) architecture is assumed as a network that may be used in such use cases in the future.

Specifically, first, in the case where DASH is used as a streaming protocol, the DASH client 21 installed on the user terminal (UE: user equipment) 13 is connected to the VDO 41-1 on the ME host 31-1. The VDO41 then connects to the generic origin server 32, which is the root server of DASH streaming. For example, a multi-level hierarchy may be configured from the VDO41 to the origin server 32, similar to a general Content Delivery Network (CDN) server configuration.

The DASH client 21 is a streaming reception/reproduction application executed on the user terminal 13 that receives the stream.

The VDO 41 is a streaming application executing on the ME host 31 that sends a stream to the DASH client 21. Further, the VDO 41 has a function of optimizing DASH streams and a function of exchanging information necessary for the optimization with the DASH client 21. For example, the VDO 41 has a function of performing viewport-dependent (VD) optimization, which is one use case of the optimization.

For example, as one method of VD optimization by the VDO 41, there is a method of generating a DASH segment (viewport-dependent optimized segment) configured by viewport-dependent packed pictures (images processed by region-wise packing) optimized for the direction of the user's line of sight in the DASH client 21. Here, optimization in the user's line-of-sight direction means, for example, increasing the information amount or the definition of the image in that direction.
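As a reading aid, a minimal sketch of this region-wise quality assignment is shown below; the Region grid, the angular thresholds, and the scale factors are illustrative assumptions and not values taken from any specification.

    from dataclasses import dataclass

    # Sketch of the region-wise quality assignment behind viewport-dependent
    # optimization: regions close to the notified line of sight keep full quality,
    # and the rest are downscaled. Grid, thresholds, and scale factors are illustrative.

    @dataclass
    class Region:
        center_azimuth_deg: float          # centre of the region on the 360-degree picture

    def quality_scale(region: Region, viewport_azimuth_deg: float) -> float:
        # Shortest angular distance between the region centre and the line of sight.
        distance = abs(((region.center_azimuth_deg - viewport_azimuth_deg + 180.0) % 360.0) - 180.0)
        if distance < 45.0:
            return 1.0                     # full resolution near the viewpoint
        if distance < 90.0:
            return 0.5
        return 0.25                        # heavily downscaled on the far side

    # Example: an 8-column region grid and a viewpoint looking toward 100 degrees.
    regions = [Region(center_azimuth_deg=i * 45.0) for i in range(8)]
    scales = {r.center_azimuth_deg: quality_scale(r, 100.0) for r in regions}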

Incidentally, in the case where the transition of the MEC environment occurs with the inter-base station handover due to the movement of the user terminal 13, it is necessary to enable seamless streaming before and after the inter-base station handover as much as possible regardless of the transition of the MEC environment. Then, as a scheme for ensuring seamless streaming, a method is considered in which the VDO 41-1 executed in the ME host 31-1(MEC environment) bound to the cell before the transfer is also transferred to the ME host 31-2 or 31-3 bound to the cell after the transfer at the same time. Specifically, a VDO application executing in the MEC environment before the transfer is executed in the MEC environment after the transfer, so that the execution state in the MEC environment before the transfer is reproduced and copied in the MEC environment after the transfer.

When this method is implemented, by grasping the network traffic of the transfer destination cell or the resource load state on the ME host 31, and then copying the execution state of the ME host 31-1 before the transfer onto the transfer destination ME host 31-2 or 31-3 before the inter-base station handover occurs, streaming can be performed seamlessly.

In this way, by transferring the VDO 41-1 of the ME host 31-1 to the VDO 41-3 of the ME host 31-3 or the VDO 41-2 of the ME host 31-2 bound to the cell after the movement of the user terminal 13, the information processing system 11 can maximize the advantages of MEC, such as low latency and load distribution.

Incidentally, the MEC architecture with conventional standard interfaces, protocols, and the like does not support such a use case, and thus seamless streaming cannot be realized. That is, there is no MEC architecture that can be deployed on mobile network devices of different vendors and models, no MEC architecture that supports such services in a cloud environment, or the like.

In this regard, as described below, in the present embodiment, a standard protocol (application program interface (API)) and a flow (sequence) necessary for realizing seamless streaming by performing the copy of the execution state (process copy) as described above, which are used in the MEC architecture, are newly proposed.

Here, contents as known techniques will be described.

First, a known technique is application migration (transfer) between the ME hosts 31, for example, when the user terminal 13 performs a handover between the ME host 31-1 and the ME host 31-2. On the other hand, run-time (execution) resource negotiation at the migration destination is not defined as a standard protocol.

Further, in the European Telecommunications Standards Institute (ETSI), a technique of registering execution conditions (e.g., static memory and disk upper limit) of a general application in an ME platform is known. On the other hand, no specific means for implementing such registration has been discussed.

Here, newly proposed matters in the present disclosure will be further described.

First, in the ME host 31-1 near the user terminal 13, the VDO 41-1 connected to the DASH client 21 on the user terminal 13 is executed. Then, a protocol for performing state synchronization between the VDO 41-1 and the VDO 41-2 on the ME host 31-2 or the VDO 41-3 on the ME host 31-3 bound to the transfer destination base station in which the inter-base station handover of the user terminal 13 is predicted is newly proposed.

Further, a method of attaching a VM to a segment request to be subjected to VDO processing is proposed, and in particular a method of handling a case where synchronization accuracy between a segment and a VM needs to be ensured on the assumption that the segment granularity is fine.

In this regard, in the present embodiment, as described below, resources such as a CPU for operating the VDO 41-2 or 41-3, a memory area, and input/output (IO) throughput are reserved on the ME host 31-2 or 31-3 of the transfer destination cell. Thereafter, the VDO processing state based on the VMs (state information of the user terminal 13) acquired by the VDO 41-1 from each DASH client 21 that established a streaming session before the transfer is copied. That is, the processing optimized for the DASH client 21 that was performed by the VDO 41-1 before the transfer is replicated on the ME host 31-2 or 31-3 of the cell of the predicted transfer destination. At this point, state synchronization continues until the transition is complete. The processing state to be synchronized includes, for example, the VDO segments generated by performing VDO processing and the already acquired state information of the user terminal 13.
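A rough illustration of what such a copied processing state could contain is sketched below; the structure and field names (client ID, MPD, latest VM, generated segments) are assumptions for illustration and not a defined interface.

    from dataclasses import dataclass, field

    # Sketch of the processing state copied from the transfer-source VDO to the
    # transfer-destination VDO during state synchronization; the fields mirror the
    # text (generated VDO segments and acquired state information) but their names
    # are illustrative.

    @dataclass
    class VdoProcessState:
        client_id: str                                    # DASH client whose session is mirrored
        mpd: str                                          # MPD of the content being streamed
        latest_vm: dict                                   # most recent viewport metrics (state information)
        generated_segments: dict = field(default_factory=dict)   # segment URL -> VDO segment

    def copy_state(source: VdoProcessState) -> VdoProcessState:
        # Snapshot sent to the transfer destination; repeated until the handover completes.
        return VdoProcessState(source.client_id, source.mpd,
                               dict(source.latest_vm), dict(source.generated_segments))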

Further, in this process state synchronization, there are a case where exactly the same processing as the VDO processing performed by the VDO 41-1 before the transition is performed by the VDO 41-2 or 41-3, and a case where the state synchronization is performed to optimize for the environment of the transition (dispatch) destination.

For example, in the case where the state synchronization is performed while optimizing for the environment of the transfer destination, the streaming quality in the transfer destination environment is changed to a quality different from that before the transfer. This change in streaming quality is performed based on traffic information of the transfer destination cell or load information of the transfer destination ME host 31-2 or 31-3, which the VDO 41-2 or 41-3 executed on the transfer destination ME host 31-2 or 31-3 acquires via an API, installed on that ME host, for acquiring the ME environment information of the transfer destination. Alternatively, the change in streaming quality is performed based on a prediction of near-term changes derived from the current environment information.

For example, in the case where the streaming quality can be changed within the range of the limit, the streaming quality is changed within the range of the limit. Note that, in the case where the streaming quality cannot be changed within the range of this limit, even after the transfer of the user terminal 13, the processing of the transfer source VDO 41-1 is maintained, and the request to generate the necessary VDO segment is redirected from the transfer destination ME host 31-2 or 31-3 to the transfer source ME host 31-1.
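The decision described above can be summarized as in the following sketch, which assumes hypothetical predictions of cell traffic and host load and a configured set of allowed bitrates; none of the thresholds, names, or the simple headroom formula are defined by the MEC or DASH specifications.

    # Sketch of the quality decision at the transfer destination: change the
    # streaming quality within the allowed range, or keep the source VDO and
    # redirect segment generation when no allowed quality fits.

    def decide_streaming_quality(current_bitrate: float, allowed_bitrates: list,
                                 predicted_cell_traffic: float, predicted_host_load: float):
        # Rough capacity estimate for the transfer destination (both inputs in [0, 1]).
        headroom = (1.0 - predicted_cell_traffic) * (1.0 - predicted_host_load)
        target = current_bitrate * headroom

        candidates = [b for b in allowed_bitrates if b <= target]
        if candidates:
            return ("serve_from_target", max(candidates))    # quality changed within the limit
        # Quality cannot be changed within the limit: keep the source VDO and redirect.
        return ("redirect_to_source", current_bitrate)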

Further, in the case where two or more transfer destination cells (ME host environments) exist at the same time, that is, in the case where a hierarchical relationship exists between coverage areas, in the case where a cell boundary (ME host environment boundary) exists, or in the case where a transfer destination cell (ME host environment) cannot be predicted, VDO41 is simultaneously executed in the ME hosts 31 in a plurality of transfer destination cells to perform process synchronization, or VDO41 is executed in the ME host 31 of a cell having a better environment to perform process synchronization.

Here, the 5G core network system functional group 71 in the information processing system 11 of the present embodiment and the session between the user terminal 13 and the application 82 in the ME host 31 as the MEC environment will be described with reference to fig. 3.

For example, in the information processing system 11, an edge server used in edge computing can significantly reduce the communication delay that is one of the bottlenecks in conventional cloud computing. Further, by performing distributed processing of a high-load application among the user terminal 13, the ME host 31 as an edge server, and the origin server 32 as a cloud server, the processing can be accelerated.

Note that the standard specification for edge computing is defined as "ETSI-MEC". The ME host in ETSI-MEC corresponds to the ME host 31 as an edge server.

In the example shown in fig. 3, the line connecting the application 82 on the ME host 31 and the user terminal 13 (or the application installed thereon) via the access network 72 of the 5G core network system functional group 71 standardized by the third generation partnership project (3GPP) represents a user data session. Note that the access network ((R)AN) 72 in the 5G core network system function group 71 includes a wired access network (AN) and a radio access network (RAN).

In addition, an edge computing platform, referred to as the ME platform 83, exists on the ME host 31. The application 82 executed by the ME platform 83 then exchanges user data, such as streaming data, with the user terminal 13 via the data plane 81, which is an abstraction of the user data session with the user terminal 13. Here, the data plane 81 has a function as the User Plane Function (UPF) 84 of 3GPP. Note that the data plane 81 may have a function corresponding to the UPF 84.

Further, in the 5G (5th generation mobile communication system) core network system function group 71, a service-based architecture is adopted, and a plurality of Network Functions (NFs) are defined as functions of the core network. These NFs are then connected via a unified interface known as a service-based interface.

In the example of fig. 3, the NF Repository Function (NRF: service discovery of NFs), Unified Data Management (UDM: management of subscriber information), Authentication Server Function (AUSF: management of authentication information of the UE), Policy Control Function (PCF: control of policies on mobility and session management for proper operation of the AMF and SMF), Network Exposure Function (NEF: provision of NF services to applications in the MNO network), Access and Mobility Management Function (AMF: mobility management/authentication/authorization of the UE and control of the SMF), and Session Management Function (SMF: session management of the UE) are shown as NFs.

< first information processing example of information processing System >

Multicast and VDO processing in the edge server will be described as a first information processing example of the information processing system 11 with reference to fig. 4 to 17.

Fig. 4 is a diagram for explaining activation of the VDO41 by the workflow manager 62, and push streaming and VDO processing.

For example, in the information processing system 11, a VDO application program executed on the ME host 31 is activated by the workflow manager 62 installed as an application on the ME platform (orchestrator) 33.

First, based on a workflow description describing network resources, computing resources, storage resources, and the like required for executing a VDO application to be activated, the workflow manager 62 secures necessary resources via the API of the ME platform 83 and activates a target application (VDO activation and VDO resource reservation/generation in fig. 4).

Thereafter, the DASH client 21 first finds the VDO 41 on the nearby ME host 31 and issues a segment request to the VDO 41 requesting acquisition of a DASH segment. At this time, it is assumed that the origin server 32 performs multicast push distribution of DASH segments (or baseband streams before encoding, as described later) to the VDO 41 (push distribution in fig. 4).

On the other hand, the VDO 41 receives a VM for each DASH client 21 from the plurality of DASH clients 21. Note that any notification method may be used for transmitting and receiving the VM; for example, the metrics notification of Server and Network Assisted DASH (SAND) may be used.

Here, in the present embodiment, as a notification method for cases that cannot be handled by the metrics notification of SAND, a method in which the URL parameters defined in DASH (ISO/IEC 23009-1, Annex I: flexible insertion of URL parameters) are applied to the VM is newly proposed, so that the segment to be optimized can be clearly indicated. Note that, in addition to this, for example, a method of storing and transferring the VM in the header of the HTTP request for the segment may be considered.

Then, based on the received content of the VM, the VDO 41 performs VDO processing on the DASH segment (or baseband stream) push-distributed from the origin server 32. For example, in the VDO processing, the DASH segment distributed from the origin server 32 is decoded once (bringing it into the same state as a baseband stream), the image quality and resolution in the viewing direction are improved by region-wise packing or the like, and the data is re-encoded to generate a VDO segment (a DASH segment subjected to VDO processing).

Here, a configuration example of the stream stack will be described.

For example, in some cases, the origin server 32 multicasts DASH segments to the VDO 41 as a push distribution, while in other cases the baseband stream before encoding is multicast. That is, the origin server 32 broadcasts the same segment or baseband stream file simultaneously to the VDOs 41 on all ME hosts 31 connected to the cloud 12 using, for example, a file multicast protocol such as FLUTE or ROUTE.

For example, in a case where the transmitting side cannot predict in advance a change in the quality (e.g., bit rate, etc.) of a request from the DASH client 21, push distribution by multicast is used. On the other hand, in a case where a change in the quality of the request can be assumed in advance, a plurality of representations having different qualities are encoded and prepared based on a specific adaptation set.

Further, in the use case of VR streaming, there is a case where variations in which the image quality is optimized depending on the viewpoint direction (viewport) of the end user are prepared. In this case, in order to accurately follow the viewpoint movement of the user, a large number of optimized segment variations covering all viewpoint directions need to be prepared, and thus server resources and network resources may be wasted unnecessarily.

Therefore, on the origin server 32 side, in a case where the quality of a segment requested from the DASH client 21, the trajectory of the viewport, and the like cannot be assumed, for example, one representation is arranged in a specific adaptation set, and one segment that is not subjected to VDO processing and has uniform image quality in the entire view angle direction (a segment having the highest quality, for example, a segment all configured with I frames) is arranged therein. Then, it is preferable to adopt a method in which this type of segment is multicast-distributed from the origin server 32 to all ME hosts 31, and in the ME hosts 31 at the edge near the DASH clients 21, DASH segments (VDO segments) subjected to VDO processing are generated based on the notified viewpoint direction information in the VMs obtained one by one from the respective DASH clients 21. That is, by adopting this method, it is assumed that waste of the distribution resources can be avoided.

Note that in some cases the quality of the segments to be multicast is encoded (compressed) at the highest quality, or in other cases the segments to be multicast are sent as a baseband stream without encoding. For example, even in baseband broadband streaming, distribution resources can be substantially saved by using multicast protocols as compared to bi-directional protocols such as HTTP/TCP.

Further, in addition to arranging one representation in the adaptation set, a case is also considered in which a segment of image quality uniform in the entire view angle direction is prepared by being divided into a plurality of bit rates or the like in advance (alternatively, there may be variations such as image quality uniform in the azimuth angle direction and image quality non-uniform in the elevation angle direction).

Fig. 5 and 6 show configuration examples of the flow stack between the DASH client 21, the VDO41, and the origin server 32.

For example, fig. 5 shows a first configuration example of a stream stack in which unidirectional multicast from the origin server 32 to the VDOs 41 is performed so that simultaneous broadcast is performed by push distribution, and streams having the same content are distributed to all VDOs 41.

Further, fig. 6 shows a second configuration example of a stream stack in which, although it looks like push distribution as a whole, an acquisition request is issued from the VDO 41 side and the stream is acquired from the origin server 32 by bidirectional unicast. For example, in the second configuration example of the stream stack, the VDO 41 side grasps the trend of requests from the DASH clients 21 and can distribute the stream so as to preferentially allocate distribution resources to segments having higher resolution in viewpoint directions with a relatively high possibility of being accessed. That is, in the second configuration example of the stream stack, distribution according to the trend of requests from the DASH clients 21 can be realized, instead of push distribution that amounts to a complete simultaneous broadcast.

Fig. 7 shows a third configuration example of a stream stack in which aggregated VDOs, each obtained by bundling a plurality of VDOs 41, are configured in a plurality of stages. For example, an aggregated VDO is executed on one of the ME hosts 31 configuring the cloud 12.

For example, in the third configuration example of the stream stack, from the origin server 32 to the aggregated VDOs, the same content is distributed to all aggregated VDOs by unidirectional multicast, so that a push-type simultaneous broadcast is performed. Then, in a single aggregated VDO, the trend of requests from the DASH clients 21 handled by its own subordinate VDOs 41 is grasped in a hierarchically aggregated manner, and the stream can be distributed so as to preferentially allocate distribution resources to segments having higher resolution in viewpoint directions with a relatively high possibility of being accessed. That is, also in the third configuration example of the stream stack, distribution according to the trend of requests from the DASH clients 21 can be realized, instead of push distribution that amounts to a complete simultaneous broadcast.
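One way to read the "trend of requests" mentioned above is as a simple frequency count over recently requested viewpoint directions, which then decides where distribution resources are spent; the sketch below, with its 30-degree azimuth bins, is only an illustration of that idea.

    from collections import Counter

    # Sketch of tracking the trend of requests per viewpoint direction so that
    # distribution resources can be allocated preferentially to directions with a
    # high probability of access. The 30-degree azimuth bins are an arbitrary choice.

    class RequestTrend:
        def __init__(self, bin_deg: int = 30):
            self.bin_deg = bin_deg
            self.counts = Counter()

        def record(self, viewport_azimuth_deg: float):
            self.counts[int(viewport_azimuth_deg // self.bin_deg) % (360 // self.bin_deg)] += 1

        def priority_directions(self, top_n: int = 3):
            # Directions requested most often receive higher-resolution segments first.
            return [b * self.bin_deg for b, _ in self.counts.most_common(top_n)]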

Here, a method of transmitting a VM will be described.

For example, the viewport in DASH client 21 is specified by the X, Y, and Z axes shown in fig. 8, as well as pitch, yaw, and roll rotations about these axes. At this time, assuming that the body of the end user is fixed, the angle is determined according to the movement of the end user's head. For example, the angles of pitch rotation, yaw rotation, and roll rotation increase in a clockwise direction toward a positive direction of each of the X-axis, Y-axis, and Z-axis. Furthermore, the X-Z plane is parallel to the ground, and when looking at the positive direction of the Z axis, all angles are defined as zero. Then, a mapping from the coordinate system of the viewport metadata to the coordinate system of the stream source is performed.

For example, assume that the coordinate system of the source uses a representation with azimuth and elevation angles, while the client side uses a coordinate system represented by pitch, yaw, and roll rotations. In this case, mapping is performed such that the direction in which the azimuth and elevation angles (center azimuth/center elevation) of the source coordinate system are both zero coincides with the direction in which the pitch, yaw, and roll rotations in the DASH client 21 are all zero. Then, following the SphereConRegionStructure structure defined in the Omnidirectional Media Application Format (OMAF), the viewport coordinate values in the converted coordinate system are stored in the rendered viewport metrics of ISO/IEC 23090-6 (Immersive Media Metrics), which is under consideration at the time of filing this application, and are set as the VM.
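A minimal sketch of the mapping is given below, assuming the simplified case in which roll is ignored, the zero directions are already aligned, and yaw/pitch map directly to azimuth/elevation; the sign conventions and field names are assumptions, and the actual OMAF transform is more involved.

    # Sketch of mapping client viewing angles (yaw/pitch, degrees) to the source
    # coordinate system (azimuth/elevation) under the simplifying assumption that
    # the zero directions already coincide and roll can be ignored.

    def client_to_source(yaw_deg: float, pitch_deg: float):
        azimuth = ((-yaw_deg + 180.0) % 360.0) - 180.0   # sign convention assumed
        elevation = max(-90.0, min(90.0, -pitch_deg))    # sign convention assumed
        return azimuth, elevation

    def build_viewport_metric(yaw_deg: float, pitch_deg: float, start_time: str) -> dict:
        azimuth, elevation = client_to_source(yaw_deg, pitch_deg)
        # Illustrative JSON-like layout of a rendered viewport metric used as the VM.
        return {"startTime": start_time,
                "centre_azimuth": azimuth,
                "centre_elevation": elevation}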

FIG. 9 shows examples of viewport metrics (render viewport metrics) and viewport data types.

For example, as one method of sending the VM to the VDO 41, there is a method of using the URL parameters defined in DASH (ISO/IEC 23009-1, Annex I: flexible insertion of URL parameters). Note that applying the DASH URL parameters to VM transfer is newly proposed in the present disclosure.

With the DASH URL parameters, a query string is inserted into the segment URL of the segment request, and information of the DASH client 21 specified by the server side can be stored in the query string. Here, the DASH client 21 is notified of the type of information to be stored in the query string through the Media Presentation Description (MPD).

Fig. 10 shows an example of MPD and VM applied to notification of VM proposed in the present embodiment.

As shown in fig. 10, the MPD includes an adaptation set addressing the VR stream to be reproduced. The URN identifying the data structure of the VM, which is to be added as a query string to the segment URL used when requesting segments of the VR stream, is stored as the value of the queryString attribute of the URL query information element defined, with the XML namespace "urn:mpeg:dash:schema:urlparam:2014", under the essential property element subordinate to the adaptation set. For example, in the example shown in fig. 10, the data structure of the JSON instance starting from the field "startTime" at the lower right is specified as the data structure of the VM, and "urn:viewport-metrics" is stored as the URN specifying the data structure of the VM. Thus, the DASH client 21 is instructed to add a VM to the segment request and send the segment request.

Fig. 11 is a flowchart for explaining the process of distributing the fragments subjected to the VDO processing by the VDO 41.

In step S11, the VDO41 transmits the MPD shown in fig. 10 to the DASH client 21. Thus, DASH client 21 receives the MPD.

In step S12, the DASH client 21 parses the MPD transmitted in step S11, and finds "urn:viewport-metrics".

Then, in a case where the structure of "urn:viewport-metrics" is not known to the hard-coded implementation installed in the DASH client 21, the process proceeds to step S13. In step S13, the DASH client 21 searches for and acquires the structure of "urn:viewport-metrics", and then the process proceeds to step S14.

On the other hand, in a case where the structure of "urn:viewport-metrics" is known to the hard-coded implementation installed in the DASH client 21, the process skips step S13 and proceeds to step S14.

In step S14, the DASH client 21 stores the viewport indicating the observed viewpoint direction in the data structure of the VM.

In step S15, the DASH client 21 URL-encodes the data structure of the VM, adds it as a query string to the segment URL, and sends the segment request to the VDO 41. Thus, the VDO 41 receives the segment request.

In step S16, the VDO 41 returns a segment response optimized by performing VDO processing on the segment based on the VM transmitted in step S15. Thus, the DASH client 21 receives the segment response.

In step S17, the DASH client 21 reproduces the segment subjected to VDO processing based on the segment response returned in step S16. Thereafter, the sending of segment requests and the returning of segment responses are similarly repeated.
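The client-side flow of steps S14 to S16 can be pictured roughly as in the following sketch; the query parameter name "vm", the JSON field names, and the segment URL are placeholders rather than values fixed by the MPD in fig. 10.

    import json
    import urllib.parse
    import urllib.request

    # Sketch of the DASH client side of steps S14-S16: fill the VM data structure
    # with the observed viewport, URL-encode it, attach it as a query string to the
    # segment URL, and fetch the VDO-processed segment.

    def request_segment(segment_url: str, start_time: str, azimuth: float, elevation: float) -> bytes:
        vm = {"startTime": start_time,            # field names follow the JSON instance of fig. 10 loosely
              "centre_azimuth": azimuth,
              "centre_elevation": elevation}
        query = urllib.parse.urlencode({"vm": json.dumps(vm)})   # "vm" is a placeholder parameter name
        with urllib.request.urlopen(f"{segment_url}?{query}") as response:
            return response.read()                # optimized segment returned in step S16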

Fig. 12 and 13 show description examples of the uniquely defined workflow description newly proposed in the present disclosure. Here, although it is uniquely defined, a framework for applications and workflows of media processing on the cloud is currently under specification development as MPEG-I Network-Based Media Processing (NBMP), and the specification has not been finalized. Note that only the VDO is described in this workflow description.

As shown in fig. 12, the element immediately below the Workflow element is an Application element having the URL attribute value "VDO-url", and represents the execution of the VDO application. Further, in the resource description element, the resources required for executing the VDO application are described.

Here, there is no input/output connection between the applications. Further, the stream file (e.g., DASH segment file) processed and provided by the VDO41 is acquired by accessing another application as a proxy server, or is provided by accessing from another application in a file access method (e.g., HTTP) provided by a web server on which the VDO41 is installed.

Note that, as with the VDOs 41 and the aggregated VDO shown in the configuration example of the stream stack in fig. 7 described above, a hierarchy may be configured among the VDOs 41. Therefore, there may be cases where the stream file is first processed by an upper-layer VDO 41 and thereafter transferred to its subordinate VDO 41, or is processed by the subordinate VDO 41 without being processed by the upper-layer VDO 41.

In this regard, in the workflow description shown in fig. 13, the siblingOf attribute is newly introduced so that such a hierarchical relationship of processing between applications can be defined. Thus, this means that VDO applications can be located under different VDO instances.
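Since the workflow description of fig. 13 is not reproduced here, the sketch below models the siblingOf relation as plain data to show how a subordinate VDO can be placed under an aggregated VDO instance; only the siblingOf name follows the text, and the rest of the layout is illustrative.

    # Sketch of a workflow with an aggregated VDO and a subordinate VDO expressed as
    # plain data; the dictionary layout and identifiers are illustrative.

    workflow = {
        "applications": [
            {"id": "aggregated-vdo", "url": "VDO-url"},
            {"id": "edge-vdo", "url": "VDO-url", "siblingOf": "aggregated-vdo"},
        ]
    }

    def subordinates_of(workflow: dict, parent_id: str):
        # Applications whose processing is placed under the given VDO instance.
        return [a["id"] for a in workflow["applications"] if a.get("siblingOf") == parent_id]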

Fig. 14 shows an example of an application object representing application properties of the VDO application as described above. For example, the attribute definitions of the application objects are managed by the ME platform, and the individual attributes of each application are managed based on the attribute definitions.

The application URL (+ provider URL) identifies the type of application such as the VDO (the provider URL is additionally identified as an option). For example, the type of application such as the VDO is specified by Application@url described in the workflow description.

The instance URI identifies the application when executing the application and is generated by the ME platform when executing the application.

The MEC system URI version is an identifier that identifies the MEC system (virtual environment) in which the application is executing.

The outline description describes an outline of processing of an application program.

Resource requirements include numerical specifications such as virtual CPU usage/sec + cycles, virtual memory/storage capacity/sec + cycles, IO throughput/sec + cycles, and are defined by MEC system URL related resource class IDs. For example, resource requirements are specified by a resource description described in a workflow description.

The application package (URL, image body) is a URL of an application execution image related to the MEC system or an image body thereof.

traffic/DNS rules (filter/DNS records) are information used to control packet routing in a 5G system via the ME platform.

Note that in the VDO activation phase initiated by the workflow manager 62, the necessary resources of the VDO application described in the resource description under Workflow/Application[@url='VDO-url'] in the workflow description are referenced.

For example, as shown in fig. 15, the ME platform 83 receiving the VDO activation request from the workflow manager 62 checks whether the resource requirements described in the resource description are satisfied on its own ME host 31. The ME platform 83 then executes the new VDO application to generate an application instance and returns the address of the instance to the workflow manager 62, in case the resource requirements described in the resource description can be met.

The application activation process will be described with reference to fig. 16.

In step S21, the workflow manager 62 generates an application object according to the application definition described in the workflow description, and requests the ME platform 83 to execute the application program. Accordingly, the ME platform 83 receives the application object sent from the workflow manager 62. Here, among the resource requirements of the application object, for example, necessary resource requirements described in the resource description of the workflow description are stored.

In step S22, before executing the application, the ME platform 83 attempts to secure various resources necessary for executing the application, such as computing resources, memory/storage, and network resources, based on the content of the application object.

In step S23, the ME platform 83 determines whether the necessary resources are ensured, and in the event that determination is made that the necessary resources are ensured, the processing proceeds to step S24.

In step S24, the ME platform 83 generates an instance ID of the application object and activates the application program. Then, in step S25, the application to be activated is activated, and processing is awaited.

On the other hand, in a case where the ME platform 83 determines in step S23 that the necessary resources cannot be ensured, the processing proceeds to step S26. In step S26, the ME platform 83 responds to the workflow manager 62 with a NACK for the application object for which resource securing failed. Thus, the workflow manager 62 receives the application object transmitted from the ME platform 83.

Here, in a case where a plurality of candidate resource requirements are described in the resource description, in step S27 the workflow manager 62 rewrites the resource requirement part of the application object and requests the ME platform 83 to execute the application again. Then, the process returns to step S21, the rewritten application object is transmitted, and thereafter similar processing is repeated until a set time limit used as an expiration limit is reached. Note that, in a case where the time limit is exceeded, the application cannot be activated.
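A compact sketch of the activation exchange in steps S21 to S27 is given below, assuming a hypothetical stand-in for the ME platform interface; the resource requirement fields, the instance ID, and the time limit handling are illustrative.

    import time

    # Sketch of the activation exchange in steps S21-S27: build an application object
    # from the workflow description, ask the ME platform to secure resources and
    # activate, and on NACK retry with the next candidate resource requirements until
    # a time limit expires. The platform interface is a hypothetical stand-in.

    class StubMePlatform:
        """Stand-in for the ME platform 83; accepts only modest resource requests."""
        def try_secure_and_activate(self, app_object):
            if app_object["resource_requirements"].get("vcpu", 0) <= 4:
                return "instance-001"        # instance ID generated on success (step S24)
            return None                      # NACK: resources could not be ensured (step S26)

    def activate_application(platform, app_url, candidate_requirements, time_limit_sec=10.0):
        deadline = time.monotonic() + time_limit_sec
        for requirements in candidate_requirements:      # candidates from the resource description
            if time.monotonic() > deadline:
                break                                    # expiration: the application cannot be activated
            app_object = {"application_url": app_url, "resource_requirements": requirements}
            instance_id = platform.try_secure_and_activate(app_object)   # steps S21-S24
            if instance_id is not None:
                return instance_id                       # activated and awaiting processing (step S25)
        return None

    # Example: the first candidate is too demanding, the rewritten one succeeds (step S27).
    print(activate_application(StubMePlatform(), "VDO-url",
                               [{"vcpu": 8, "mem_gb": 16}, {"vcpu": 4, "mem_gb": 8}]))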

A process of giving notification of view metrics from the DASH client 21 and generating VDO segments in the VDO41 will be described with reference to fig. 17.

In step S31, the origin server 32 transmits the MPD to the VDO 41. Therefore, VDO41 receives the MPD.

In step S32, the DASH client 21 sends the MPD request to the VDO 41. Thus, VDO41 receives the MPD request.

In step S33, the VDO41 transmits an MPD response to the MPD request transmitted in step S32 to the DASH client 21. Thus, DASH client 21 receives the MPD response.

In step S34, the origin server 32 sends the segment to the VDO 41. Thus, VDO41 receives the fragments.

In step S35, the DASH client 21 attaches the VM to the segment request and sends the segment request to the VDO 41. Thus, the VDO 41 receives the segment request and the VM.

In step S36, the VDO41 performs VDO processing on the target segment distributed from the origin server 32 in step S34 based on the VM accompanying the segment request transmission in step S35, and generates a VDO segment (DASH segment subjected to VDO processing).

In step S37, the VDO41 returns the VDO segment subjected to VDO processing to the DASH client 21 as a response to the segment request.

As described above, the information processing system 11 can distribute the segments from the origin server 32 to the VDO41 by multicast, perform VDO processing by the VDO41 of the ME host 31 as an edge server, and transmit the VDO segments to the DASH client 21.

< second information processing example of information processing System >

As a second information processing example of the information processing system 11, a process of synchronously replicating the VDO 41 between the ME hosts 31 in conjunction with the inter-base station handover of the DASH client 21 will be described with reference to fig. 18 to 23. For example, the second information processing example of the information processing system 11 has a characteristic that, when the environment of the ME host 31 changes due to an inter-base station handover of the DASH client 21, a CPU, a storage area, IO throughput, and the like for operating the VDO 41 are reserved on the transfer destination ME host 31, and the processing state of the VDO 41 is synchronously copied to the VDO 41 on the transfer destination ME host 31 before the transfer.

For example, as shown in fig. 18, when the user terminal 13 on which the DASH client 21 is installed moves, an inter-base station handover occurs between base stations associated with ME hosts 31. Hereinafter, the access network 72 before the handover is referred to as the source RAN 72S, and the ME host 31 before the handover is referred to as the source ME host 31S. Similarly, the access network 72 serving as the handover destination is referred to as the target RAN 72T, and the ME host 31 serving as the handover destination is referred to as the target ME host 31T.

Then, the user terminal 13, on which the DASH client 21 receiving streaming via the source RAN 72S from the VDO application on the source ME host 31S of the source RAN 72S bound to a certain base station is installed, moves to the target RAN 72T of another base station to which the target ME host 31T is bound. Due to the inter-base station handover accompanying this movement, the VDO 41S of the source ME host 31S is transferred to the VDO 41T of the target ME host 31T, as indicated by the two-dot chain line arrow in fig. 18.

The processing executed at this time will be described with reference to the flow of fig. 22. Note that the flow of fig. 22 shows the processing after streaming has been started between the DASH client 21 and the VDO 41S.

That is, the DASH client 21 on the user terminal 13 connected to the source RAN 72S has been performing streaming (streaming from the source ME host in fig. 22) based on the stream file subjected to VDO processing by the VDO 41S on the source ME host 31S.

Here, as shown in fig. 19, it is assumed that the user terminal 13 on which the DASH client 21 is installed moves to the target RAN 72T of a base station to which the target ME host 31T, different from the source ME host 31S, is bound.

Further, the VDO 41S executing on the source ME host 31S can detect the movement (location) of the user terminal 13 on which the DASH client 21 is installed through the API provided by the ME platform 83S. Then, it is assumed that the user terminal 13 is predicted to move from the source RAN 72S in which it currently exists to another target RAN 72T (prediction of the transfer to the target ME host in fig. 22), based on location transition information, mobility pattern analysis using statistical information and AI, and the like.
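The prediction method itself is left open in the text (location transitions, statistics, AI); the sketch below shows only the simplest conceivable form, in which several consecutive location reports closest to another RAN trigger the prediction, and the location representation, the API, and the threshold are all assumptions.

    # Sketch of a trivial transfer prediction: if the client's reported position is
    # closest to the coverage centre of another RAN for several consecutive
    # observations, predict a handover to that RAN. The use of simple 2D coordinates
    # and the threshold are assumptions.

    def predict_target_ran(location_history, ran_centers, current_ran, consecutive=3):
        if len(location_history) < consecutive:
            return None
        def nearest(pos):
            return min(ran_centers,
                       key=lambda r: (pos[0] - ran_centers[r][0]) ** 2 + (pos[1] - ran_centers[r][1]) ** 2)
        recent = [nearest(p) for p in location_history[-consecutive:]]
        if all(r == recent[0] for r in recent) and recent[0] != current_ran:
            return recent[0]          # predicted target RAN (and hence target ME host)
        return None

    # Example: the last three reports are closest to "target-RAN", so a transfer is predicted.
    centers = {"source-RAN": (0.0, 0.0), "target-RAN": (10.0, 0.0)}
    print(predict_target_ran([(1, 0), (6, 0), (7, 0), (8, 0)], centers, "source-RAN"))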

Then, the VDO 41S on the source ME host 31S requests the ME platform (orchestrator) 33 to execute the VDO 41T on the target ME host 31T (VDO activation in fig. 22). That is, before the DASH client 21 transfers to the target RAN 72T, the corresponding VDO application is generated separately on the target ME host 31T and the execution state of the application is copied.

In response, the ME platform 83T of the target ME host 31T attempts resource reservation and execution based on resource requirements equivalent to those of the streaming session currently established between the DASH client 21 and the VDO 41 (VDO resource reservation/generation in fig. 22).

Here, the policy (for example, maintaining the currently established session with the DASH client 21; continuing the service at a quality lower (or higher) than that of the currently established session depending on the expected traffic of the transfer destination cell and the load state of the ME platform 83T; or maintaining the session currently established with the VDO 41S on the source ME host 31S if a quality reduction would be required) is based on the specification of the "session update on handover policy" described in the workflow description shown in fig. 20.

For example, the default value of Workflow/Policy@keepAlreadyEstablishedIfFailed is false, and in a case where this attribute is not described, the policy always indicates upgrading if possible (e.g., improving the streaming quality) and downgrading if necessary (e.g., reducing the streaming quality). Note that processing in a case where the MEC environment changes due to the handover or the like will be described in the third information processing example.
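As a reading aid, the effect of this policy attribute can be sketched as follows; the attribute name follows the description of fig. 20, and the decision inputs and return values are hypothetical.

    # Sketch of how the session-update-on-handover policy could be evaluated for the
    # transfer destination. The inputs (whether the target can serve and which quality
    # change it would need) are hypothetical.

    def session_after_handover(target_can_serve: bool, quality_change: str,
                               keep_already_established_if_failed: bool = False):
        # quality_change is one of "none", "upgrade", "downgrade" expected at the target.
        if not target_can_serve:
            return ("keep_session_with_source_vdo", "none")
        if quality_change == "downgrade" and keep_already_established_if_failed:
            # The policy asks to keep the already established session rather than degrade it.
            return ("keep_session_with_source_vdo", "none")
        # Default (attribute false or absent): upgrade if possible, downgrade if necessary.
        return ("serve_from_target", quality_change)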

Further, for the VDO 41T on the target ME host 31T, the necessary resources are reserved and the VDO 41T is activated (VDO resource reservation/generation in fig. 22), but streaming to the DASH client 21 does not start immediately. For example, the VDO 41T receives a synchronization VDO processing request requesting synchronized VDO processing from the VDO 41S on the source ME host 31S, and executes VDO processing in synchronization with the VDO 41S on the source ME host 31S.

Assume that after a period of time has elapsed, the VDO 41S on the source ME host 31S detects that the user terminal 13 on which the DASH client 21 is installed has moved via the ME platform 83S and is newly connected to the target RAN72T to which the target ME host 31T is bound (detection of transfer of the DASH client to the target ME host in fig. 22).

In response to this, the traffic change is performed so that the VDO 41T on the target ME host 31T can receive the streaming request from the target RAN72T after the transfer (traffic update to the target ME host in fig. 22). At the same time, the traffic change is performed so that the response from the VDO 41T on the target ME host 31T reaches the user terminal 13.

Thus, VDO 41T on target ME host 31T starts streaming to DASH client 21 (streaming from target ME host in fig. 22) after moving via target RAN 72T.

Note that, as shown in fig. 21, whether or not the transfer itself of the VDO application between the ME hosts 31 is allowed can be specified in the workflow description. For example, Policy@doNotMigrate='true' is specified in a case where the transfer itself of the VDO application between the ME hosts 31 is not allowed. On the other hand, in a case where migration between the ME hosts 31 is to be attempted so as to utilize the MEC as much as possible, Policy@doNotMigrate='false', which is the default value, is specified. Of course, in the case of Policy@doNotMigrate='true', keepAlreadyEstablishedIfFailed shown in fig. 20 is omitted.

Fig. 23 is a flowchart for explaining VDO transfer processing by inter-base station handover.

In step S41, the origin server 32 sends the segment to VDO 41S and VDO 41T. Thus, VDO 41S and VDO 41T receive these fragments.

In step S42, the DASH client 21 attaches the VM to the segment request and sends the segment request to the VDO 41S. Thus, the VDO 41S receives the segment request and the VM.

In step S43, the VDO 41S transmits the segment URL, MPD, and VM to the VDO 41T, and requests a synchronization VDO process. Thus, VDO 41T receives segment URL, MPD and VM.

In step S44, the VDO 41S performs VDO processing to generate a VDO fragment, and in step S45, the VDO 41T performs synchronized VDO processing to generate a VDO fragment.

Thereafter, when the handover occurs, in step S46, the DASH client 21 attaches the VM to the segment request and sends the segment request to the VDO 41T. Thus, the VDO 41T receives the segment request and the VM.

In step S47, the VDO 41T returns the VDO segment subjected to the VDO processing in step S45 to the DASH client 21 as a response to the segment request.

As described above, in the information processing system 11, by synchronously copying the VDO processing of the VDO 41 between the ME hosts 31 in conjunction with the inter-base station handover of the DASH client 21, VDO segments can be transmitted seamlessly to the DASH client 21.
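On the target side, the synchronization VDO processing request of step S43 can be handled roughly as sketched below; the request fields follow claim 3 (information specifying the segment, the MPD, and the viewpoint direction information), while the class and method names are assumptions and the optimization itself is abstracted as a placeholder.

    # Sketch of the target-side handling of a synchronization VDO processing request
    # (step S43 onward): pre-generate the VDO segment in sync with the source so that
    # it is ready when the handed-over client sends its own request (steps S46-S47).

    class TargetVdo:
        def __init__(self):
            self.ready_segments = {}                     # segment URL -> VDO segment bytes

        def _optimize(self, segment: bytes, vm: dict) -> bytes:
            # Placeholder for the same VDO processing performed by the source VDO.
            return segment

        def on_sync_vdo_request(self, segment_url: str, mpd: str, vm: dict, multicast_segment: bytes):
            # Synchronized VDO processing, repeated until the handover completes.
            self.ready_segments[segment_url] = self._optimize(multicast_segment, vm)

        def on_segment_request(self, segment_url: str, vm: dict):
            # After the handover, the DASH client sends its segment request (with VM) here.
            return self.ready_segments.get(segment_url)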

Processing in the case where VDO41 cannot be activated on the ME host 31 serving as the transfer destination due to resource shortage will be described with reference to fig. 24 to 26.

Fig. 24 shows the states before and after the transition of inter-base station handover that accompanies the movement of the DASH client 21 from the source RAN 72S to the target RAN72T (see fig. 25).

That is, the user terminal 13 on which the DASH client 21 is installed, which is being streamed to by the VDO application on the source ME host 31S bound to the source RAN 72S of a certain base station, moves to the target RAN 72T of another base station to which the target ME host 31T is bound. The VDO 41S on the source ME host 31S attempts a transfer due to the inter-base-station handover accompanying this movement, but the VDO 41T on the target ME host 31T may not be activated due to a resource shortage.

In this case, the VDO 41S on the source ME host 31S is maintained, and the VDO 41S can send the segments received from the origin server 32 from the data plane 81S to the data plane 81T, which forwards them to the DASH client 21 via the target RAN 72T.

The processing executed at this time will be described with reference to the flow of fig. 26. Note that in the flow of fig. 26, processing after streaming has been started between the DASH client 21 and the VDO 41S is shown.

First, as described above with reference to the flow of fig. 22, the VDO 41S predicts movement of the user terminal 13 from the currently connected source RAN 72S to another target RAN 72T (prediction of a transfer to a target ME host in fig. 26).

Then, the VDO 41S on the source ME host 31S requests the workflow manager 62 of the ME platform (orchestrator) 33 to execute the VDO 41T on the target ME host 31T (VDO (at target ME host) activation in fig. 26).

In response, the ME platform 83T of the target ME host 31T attempts resource reservation and execution based on resource requirements equivalent to those of the session currently established between the DASH client 21 and the VDO 41S. However, in this case, the activation of the VDO 41T fails.

Assume that, after a period of time has elapsed, the VDO 41S on the source ME host 31S detects, via the ME platform 83S, that the user terminal 13 on which the DASH client 21 is installed has moved and is newly connected to the target RAN 72T to which the target ME host 31T is bound (detection of transfer of the DASH client to the target ME host in fig. 26).

In this case, a traffic change is performed so that the VDO 41S on the source ME host 31S can receive streaming requests from the target RAN 72T after the transfer (traffic change to the source ME host in fig. 26). At the same time, the traffic change to the source ME host 31S is performed on the ME platform 83T of the target ME host 31T.

Thus, even in the case of a failure to activate the VDO 41T on the target ME host 31T, streaming to the DASH client 21 can be achieved via the target RAN 72T while maintaining the VDO 41S on the source ME host 31S.

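For illustration, a minimal sketch of this fallback decision is shown below (Python). The Orchestrator class is a stand-in for the ME platform/orchestrator interface and is an assumption for illustration, not the actual API.

class Orchestrator:
    def __init__(self, free_resources: dict[str, int]):
        self.free = free_resources
        self.routes: dict[str, str] = {}

    def activate_vdo(self, host: str, required: int = 10) -> None:
        if self.free.get(host, 0) < required:
            raise RuntimeError(f"resource shortage on {host}")
        self.free[host] -= required

    def reroute_traffic(self, via: str, to: str) -> None:
        self.routes[via] = to   # requests entering the data plane of `via` are forwarded to `to`

def prepare_transfer(orchestrator: Orchestrator, source_host: str, target_host: str) -> str:
    try:
        orchestrator.activate_vdo(target_host)
        return target_host                      # normal case: stream from the target VDO
    except RuntimeError:
        orchestrator.reroute_traffic(via=target_host, to=source_host)
        return source_host                      # fallback: keep the source VDO (figs. 24 to 26)

orc = Orchestrator({"ME host 31T": 0})          # the target has no free resources
print(prepare_transfer(orc, "ME host 31S", "ME host 31T"))  # -> ME host 31S
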
A process of executing the VDO 41T in each of the target ME host 31T-A and the target ME host 31T-B bound to two cells (the target RAN 72T-A and the target RAN 72T-B) to which transfer is expected, in the case where the transfer destination cell (RAN 72) cannot be predicted, will be described with reference to figs. 27 to 29. Here, the case where the DASH client 21 eventually transfers to the target RAN 72T-A bound to the target ME host 31T-A is shown.

For example, fig. 27 shows the state of the inter-base-station handover that accompanies the movement of the DASH client 21 from the source RAN 72S to the target RAN 72T-A (see fig. 28).

That is, in the case where it cannot be predicted to which of the target RAN 72T-A and the target RAN 72T-B the DASH client 21 will transfer, the VDO 41T-A is activated in the target ME host 31T-A and the VDO 41T-B is activated in the target ME host 31T-B. Then, when a transfer of the DASH client 21 to the target RAN 72T-A is detected, streaming from the VDO 41T-A to the DASH client 21 via the target RAN 72T-A is performed.

The processing executed at this time will be described with reference to the flow of fig. 29. Note that in the flow of fig. 29, processing after streaming has been started between the DASH client 21 and the VDO 41S is shown.

First, as described above with reference to the flow of fig. 22, the VDO 41S predicts movement of the user terminal 13 from the currently connected source RAN 72S to another target RAN 72T-A or 72T-B (prediction of a transfer to target ME host A or B in fig. 29). That is, in this case, it is not possible to predict to which of the target RAN 72T-A and the target RAN 72T-B the user terminal will move.

The VDO 41S on the source ME host 31S then requests the workflow manager 62 of the ME platform (orchestrator) 33 to execute the VDO 41T-A on the target ME host 31T-A and the VDO 41T-B on the target ME host 31T-B (VDO (at target ME hosts A and B) activation in fig. 29).

In response, the ME platform 83T of each target ME host 31T attempts resource reservation and execution based on resource requirements equivalent to those of the session currently established between the DASH client 21 and the VDO 41S. Thus, in the VDO 41T-A on the target ME host 31T-A, the necessary resources are reserved and the VDO 41T-A is activated (VDO resource reservation/generation in fig. 29). Similarly, in the VDO 41T-B on the target ME host 31T-B, the necessary resources are reserved and the VDO 41T-B is activated (VDO resource reservation/generation in fig. 29).

Thereafter, the ME platform 83T of each target ME host 31T requests synchronous VDO processing: from the VDO 41T-A on the target ME host 31T-A and from the VDO 41T-B on the target ME host 31T-B. Thus, the VDO 41T-A and the VDO 41T-B can perform VDO processing in synchronization with the VDO 41S on the source ME host 31S.

Assume that, after a period of time has elapsed, the VDO 41S on the source ME host 31S detects, via the ME platform 83S, that the user terminal 13 on which the DASH client 21 is installed has moved and is newly connected to the target RAN 72T-A to which the target ME host 31T-A is bound (detection of transfer of the DASH client to target ME host A in fig. 29).

In response to this, a traffic change is performed so that the VDO 41T-A on the target ME host 31T-A can receive streaming requests from the target RAN 72T-A after the transfer (traffic update to target ME host A in fig. 29). At the same time, a traffic change is performed so that responses from the VDO 41T-A on the target ME host 31T-A reach the user terminal 13.

Thereafter, the VDO 41T-A on the target ME host 31T-A starts streaming to the DASH client 21 via the target RAN 72T-A after the movement (streaming from target ME host A in fig. 29). At this point, the synchronous VDO processing of the VDO 41T-A on the target ME host 31T-A continues, and the synchronous VDO processing of the VDO 41T-B on the target ME host 31T-B ends.

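For illustration, a minimal sketch of this procedure is shown below (Python): synchronized VDO processing is started on both candidate target ME hosts, and the host that was not selected is stopped once the actual transfer destination is detected. The function names are assumptions for illustration.

def handle_unpredictable_transfer(candidates: list[str], detect_destination) -> str:
    # Start synchronized VDO processing on every candidate (target ME hosts A and B).
    running = {host: f"sync VDO processing on {host}" for host in candidates}
    destination = detect_destination()          # e.g. the RAN to which the terminal actually connects
    for host in list(running):
        if host != destination:
            running.pop(host)                   # end synchronous VDO processing on the host not chosen
    return destination                          # streaming continues from the chosen host

chosen = handle_unpredictable_transfer(
    ["ME host 31T-A", "ME host 31T-B"],
    detect_destination=lambda: "ME host 31T-A",
)
print(chosen)
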
A process for ensuring fault-tolerant redundancy will be described with reference to figs. 30 to 33. Here, an example is shown in which the VDO 41T is executed in each of the target ME host 31T-A and the target ME host 31T-B bound to two cells with overlapping coverage (the target RAN 72T-A and the target RAN 72T-B), and two streaming sessions are performed synchronously. Note that it is assumed that, after the transfer, the DASH client 21 is connected to both the target RAN 72T-A and the target RAN 72T-B at the same time.

For example, fig. 30 illustrates the state of the inter-base-station handover that accompanies the movement of the DASH client 21 from the source RAN 72S to the target RAN 72T-A and the target RAN 72T-B (see fig. 31).

That is, in the case of overlapping coverage of the target RAN 72T-A and the target RAN 72T-B, the VDO 41T-A is activated in the target ME host 31T-A and the VDO 41T-B is activated in the target ME host 31T-B. Then, when a transfer of the DASH client 21 to the target RAN 72T-A and the target RAN 72T-B is detected, streaming from the VDO 41T-A to the DASH client 21 via the target RAN 72T-A and streaming from the VDO 41T-B to the DASH client 21 via the target RAN 72T-B are performed.

The processing executed at this time will be described with reference to the flow of fig. 32. Note that the flow of fig. 32 shows the processing after streaming has been started between the DASH client 21 and the VDO 41S.

First, as described above with reference to the flow of fig. 22, the VDO 41S predicts movement of the user terminal 13 from the currently connected source RAN 72S to another target RAN 72T-A or 72T-B (prediction of a transfer to target ME host A or B in fig. 32).

The VDO 41S on the source ME host 31S then requests the workflow manager 62 of the ME platform (orchestrator) 33 to execute the VDO 41T-A on the target ME host 31T-A and the VDO 41T-B on the target ME host 31T-B (VDO (at target ME hosts A and B) activation in fig. 32).

In response, the ME platform 83T of each target ME host 31T attempts resource reservation and execution based on resource requirements equivalent to those of the session currently established between the DASH client 21 and the VDO 41S. Thus, in the VDO 41T-A on the target ME host 31T-A, the necessary resources are reserved and the VDO 41T-A is activated (VDO resource reservation/generation in fig. 32). Similarly, in the VDO 41T-B on the target ME host 31T-B, the necessary resources are reserved and the VDO 41T-B is activated (VDO resource reservation/generation in fig. 32).

Thereafter, the ME platform 83T of each target ME host 31T requests synchronous VDO processing: from the VDO 41T-A on the target ME host 31T-A and from the VDO 41T-B on the target ME host 31T-B. Thus, the VDO 41T-A and the VDO 41T-B can perform VDO processing in synchronization with the VDO 41S on the source ME host 31S.

Assume that, after a period of time has elapsed, the VDO 41S on the source ME host 31S detects, via the ME platform 83S, that the user terminal 13 on which the DASH client 21 is installed has moved and is newly connected to the target RAN 72T-A to which the target ME host 31T-A is bound and to the target RAN 72T-B to which the target ME host 31T-B is bound (detection of transfer of the DASH client to target ME hosts A and B in fig. 32).

In response to this, a traffic change is performed so that the VDO 41T-A on the target ME host 31T-A can receive streaming requests from the target RAN 72T-A after the transfer (traffic update to target ME host A in fig. 32). At the same time, a traffic change is performed so that responses from the VDO 41T-A on the target ME host 31T-A reach the user terminal 13.

Similarly, a traffic change is performed so that the VDO 41T-B on the target ME host 31T-B can receive streaming requests from the target RAN 72T-B after the transfer (traffic update to target ME host B in fig. 32). At the same time, a traffic change is performed so that responses from the VDO 41T-B on the target ME host 31T-B reach the user terminal 13.

Thereafter, the VDO 41T-A on the target ME host 31T-A starts streaming to the DASH client 21 via the target RAN 72T-A after the movement (streaming from target ME host A in fig. 32). Furthermore, the synchronous VDO processing of the VDO 41T-A on the target ME host 31T-A continues.

Similarly, VDO 41T-B on target ME host 31T-B starts streaming to DASH client 21 after moving via target RAN 72T-B (streaming from target ME host B in fig. 32). Furthermore, the synchronous VDO processing of VDO 41T-B on target ME host 31T-B continues.

For example, as shown in fig. 33, it is assumed that an instruction on the redundant configuration can be given by setting the copy attribute of the application element of the target application to true in the workflow description.

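For illustration, a minimal sketch of how such a flag might be interpreted is shown below (Python). The attribute name "replicate" stands in for the copy attribute mentioned above; the actual element layout of fig. 33 is not reproduced here, so it should be treated as an assumption.

import xml.etree.ElementTree as ET

APP = ET.fromstring('<Application name="VDO" replicate="true"/>')
targets = ["ME host 31T-A", "ME host 31T-B"]
active = targets if APP.get("replicate") == "true" else targets[:1]
print(active)  # both target hosts stream in parallel when the redundancy flag is true
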
< third information processing example of the information processing system >

As a third information processing example of the information processing system 11, variable replication of the state of VDO processing based on traffic prediction will be described with reference to figs. 34 to 38. For example, the third information processing example of the information processing system 11 has a characteristic that, when the state of the VDO processing before the transfer is synchronously copied to the VDO 41 on the ME host 31 of the transfer destination, the stream quality is changed based on the traffic prediction and the resource prediction of the ME host 31.

For example, assume that it is detected in advance, in the VDO 41T executed on the ME host 31T bound to the target RAN 72T to which the transfer is scheduled, that session resources equivalent to those of the session before the transfer may not be guaranteed. In this case, when a change in the stream quality after the transfer is expected, VDO processing is requested so as to generate segments of the changed quality (hereinafter, this request is referred to as a synchronization adaptation VDO processing request). Then, in the selection of the target quality, optimization is performed within a limited range based on the MPD or a recommended rate.

Also in this case, the synchronization VDO processing request used in the processing for the case where the transfer destination cell (RAN 72) cannot be predicted, described above with reference to fig. 29, in the processing for ensuring fault-tolerant redundancy, described above with reference to fig. 32, and the like, may be replaced with the synchronization adaptation VDO processing request.

The processing executed at this time is shown in the flow of fig. 34. Note that in the flow of fig. 34, processing similar to that of fig. 22 is performed, except that the synchronization VDO processing request in the flow of fig. 22 is replaced with the synchronization adaptation VDO processing request and the synchronization adaptation VDO processing is then performed; a detailed description thereof is therefore omitted.

Fig. 35 is a flowchart for explaining a process of variably copying the state of the VDO process based on the traffic prediction.

For example, in steps S51 and S52, processing similar to that in steps S41 and S42 in fig. 23 is performed. Then, in step S53, the VDO 41S transmits the segment URL, the MPD, and the VM to the VDO 41T, and requests the synchronization adaptation VDO processing. Accordingly, the VDO 41T performs the synchronization adaptation VDO processing in step S55.

Then, in steps S54, S56, and S57, similar processing to that in steps S44, S46, and S47 in fig. 23 is performed.

As described above, when the VDO 41T performs the synchronization adaptation VDO processing, the state of the VDO processing can be variably replicated based on, for example, the traffic prediction, and streaming can be performed in anticipation of a change in the stream quality after the transfer.

Here, the synchronous adaptation VDO processing request will be described.

For example, as described above, the VDO 41S on the source ME host 31S, which streams to the DASH client 21 before the transfer, detects the possibility that the DASH client 21 will transfer to the target RAN 72T bound to the target ME host 31T. In response, after the VDO 41T is activated on the target ME host 31T, the synchronization adaptation VDO processing is performed by the VDO 41T.

Then, in the synchronization adaptation VDO processing, the stream quality after VDO processing is changed under the constraints of the streaming session that is assumed to be appropriate when the DASH client 21 transfers to the target RAN 72T in the future, in view of the current traffic state in the target RAN 72T and the state of resources such as computation and storage in the target ME host 31T.

Further, there is a possibility that, depending on the current traffic and the state of resources such as computation and storage in the target RAN 72T, or depending on the traffic and resource states predicted for the future after the transfer of the DASH client 21, the session resources currently established in the source ME host 31S cannot be guaranteed. Thus, in a case where the environment is worse than the current environment, VDO processing is performed such that segments of a representation with lower resource consumption (e.g., a lower bit rate) are generated among the representations of the currently reproduced adaptation set. Note that in some cases a representation is selected from the group of representations in the same adaptation set, and in other cases the adaptation set itself is changed. In this way, segments with different stream qualities (a higher bit rate or a lower bit rate) can be adaptively selected based on, for example, the traffic prediction of the branch destination.

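For illustration, a minimal sketch of this selection is shown below (Python): given the representations of the currently reproduced adaptation set and a bandwidth prediction for the environment after the transfer, the representation with the highest bit rate that still fits is chosen. Reading the representations from the MPD is omitted; the (id, bandwidth) pairs are assumed to have already been extracted.

def select_representation(representations: list[tuple[str, int]], predicted_bps: int) -> str:
    # Keep only representations whose bandwidth fits the predicted throughput.
    fitting = [r for r in representations if r[1] <= predicted_bps]
    chosen = max(fitting, key=lambda r: r[1]) if fitting else min(representations, key=lambda r: r[1])
    return chosen[0]

reps = [("rep-high", 8_000_000), ("rep-mid", 4_000_000), ("rep-low", 1_500_000)]
print(select_representation(reps, predicted_bps=2_000_000))  # -> rep-low (a lower-bit-rate representation)
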
Fig. 36 shows the segment flow when VDO processing is performed with such a synchronization adaptation VDO processing request as a trigger.

For example, as shown, assume that the representation used as the reproduction target in the VDO 41S of the source ME host 31S is Representation-(high), and that the representation with the best attributes for the current or future resource state in the target ME host 31T is Representation-(low).

Then, using the synchronization adaptation VDO processing request as a trigger, VDO processing is started based on segments of a different representation of the same adaptation set in the same time period as the currently reproduced segment. That is, according to the synchronization adaptation VDO processing request, the VDO 41S performs VDO processing on the segment SegH having the higher bit rate, and the VDO 41T performs VDO processing on the segment SegL having the lower bit rate. Note that this synchronization adaptation VDO processing request is issued when it is confirmed that session resources different from those currently secured by the source ME host 31S are secured in the environment of the target ME host 31T.

Hereinafter, a message transfer protocol for the synchronization adaptation VDO processing request will be described.

For example, a message transfer protocol for the synchronization adaptation VDO processing request, sent before the transfer from the VDO 41S on the source ME host 31S to the VDO 41T on the target ME host 31T to which the transfer is planned, may be defined as follows.

That is, a VDO trigger request message is introduced as a message between the VDOs 41. For example, the VDO trigger request element has an adaptiveVDO attribute indicating whether the VDO processing is adaptive, a viewportMetricsFromDASHClient attribute for storing the VM from the VDO 41S on the source ME host 31S, and a stream element for specifying the segment of the target stream.

Fig. 37 shows an example of the structure of a VDO trigger request.

For example, the adaptiveVDO attribute indicates that normal VDO processing is performed when its value is false, and that adaptive VDO processing is performed when its value is true.

Further, for example, the VM stored in the viewportMetricsFromDASHClient attribute is the one issued from the DASH client 21 to the VDO 41S in step S52 of fig. 35 described above.

Further, the stream element refers to the stream to be controlled: it includes either an attribute holding the URL of the MPD or an MPD attribute for storing the MPD body itself, and it has a segment path attribute for storing an XPath string indicating a specific segment described in the MPD.

Here, a VDO trigger request message is transferred from the VDO 41S on the source ME host 31S to the VDO 41T on the target ME host 31T by using, for example, HTTP-POST as shown in fig. 38.

For example, fig. 38 shows that the segment optimized for the viewpoint direction indicated by the viewport metrics can be changed to a quality (e.g., bit rate) optimal for the environment of the ME host 31 serving as the transfer destination, and generated with respect to the segment specified in the SegmentTemplate element of the first AdaptationSet element of the first Period element of the MPD specified by the URL http://a.com/a.mpd.

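For illustration, a minimal sketch of constructing and serializing such a VDO trigger request carried by HTTP POST is shown below (Python). The attribute names (adaptiveVDO, viewportMetricsFromDASHClient, mpdUrl, segmentPath) follow the description above, but the exact schema of fig. 37 and the request of fig. 38 are not reproduced here, so they should be treated as assumptions; the request is only printed, not actually sent.

import xml.etree.ElementTree as ET

req = ET.Element("VDOTriggerRequest", adaptiveVDO="true",
                 viewportMetricsFromDASHClient="azimuth=30,elevation=-10")
ET.SubElement(req, "stream", mpdUrl="http://a.com/a.mpd",
              segmentPath="/MPD/Period[1]/AdaptationSet[1]/SegmentTemplate")
body = ET.tostring(req, encoding="unicode")

http_post = (
    "POST /vdo/trigger HTTP/1.1\r\n"
    "Host: target-me-host.example\r\n"          # placeholder for the target ME host 31T
    "Content-Type: application/xml\r\n"
    f"Content-Length: {len(body.encode())}\r\n\r\n"
    f"{body}"
)
print(http_post)
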
< overview of VDO processing in the edge server >

An outline of VDO processing in the edge server will be described with reference to fig. 39.

For example, A of fig. 39 shows an outline of VDO processing performed in a conventional information processing system, and B of fig. 39 shows an outline of VDO processing performed in the ME host 31 serving as an edge server in the information processing system 11 of the present embodiment.

For example, in the conventional information processing system, DASH segments generated by region-wise packing (RWP) in the origin server 32 are multicast to the ME hosts 31-1 to 31-3. Then, a DASH segment suitable for the state of each of the DASH clients 21-1 to 21-3 is selected based on the MPD, and the DASH segments are acquired from the ME hosts 31-1 to 31-3 based on the respective segment URLs.

On the other hand, in the information processing system 11, the DASH segments are multicast from the origin server 32 to the ME hosts 31-1 to 31-3. A segment appropriate for the state of each of the DASH clients 21-1 to 21-3 is then selected based on the MPD, and RWP is performed in each of the ME hosts 31-1 to 31-3 based on the segment URL and the VM. Thus, viewport-optimized DASH segments for the DASH clients 21-1 to 21-3 can be generated in the ME hosts 31-1 to 31-3 and acquired by the DASH clients 21-1 to 21-3, respectively.

Therefore, in the information processing system 11, by distributing the processing for VR streaming across the ME hosts 31-1 to 31-3, concentration of load on the origin server 32 and the occurrence of delay can be avoided.

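For illustration, a minimal sketch of the edge-side arrangement in B of fig. 39 is shown below (Python): the origin multicasts the same segments to every ME host, and each ME host performs the viewport optimization (RWP) per client on request. The optimization itself is only a placeholder; only the distribution structure is illustrated.

def multicast(segment: bytes, edges: list[dict]) -> None:
    for edge in edges:
        edge["segments"].append(segment)        # the same payload reaches every ME host

def serve(edge: dict, client_vm: str) -> bytes:
    segment = edge["segments"][-1]
    return b"rwp(" + client_vm.encode() + b"):" + segment  # viewport-optimized at the edge

edges = [{"name": f"ME host 31-{i}", "segments": []} for i in (1, 2, 3)]
multicast(b"<dash segment>", edges)
print(serve(edges[0], "azimuth=30,elevation=-10"))
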
< example of configuration of computer >

Next, the series of processes (information processing method) described above may be executed by hardware or software. In the case where a series of processes are executed by software, a program configuring the software is installed in a general-purpose computer or the like.

Fig. 40 is a block diagram showing a configuration example of an embodiment of a computer in which a program for executing the series of processes described above is installed.

The program may be recorded in advance on the hard disk 105 or the ROM 103 as a recording medium built in the computer.

Alternatively, the program may be stored (recorded) in a removable recording medium 111 driven by the drive 109. Such a removable recording medium 111 may be provided as so-called package software. Here, examples of the removable recording medium 111 include a flexible disk, a compact disc read only memory (CD-ROM), a magneto-optical (MO) disk, a Digital Versatile Disc (DVD), a magnetic disk, and a semiconductor memory.

Note that the program may be installed in the computer from the removable recording medium 111 as described above, or may be downloaded to the computer via a communication network or a broadcast network and installed in the built-in hard disk 105. That is, for example, the program may be wirelessly transmitted from a download site to the computer via an artificial satellite for digital satellite broadcasting, or may be transmitted to the computer via a network such as a Local Area Network (LAN) or the internet by wire.

The computer incorporates a Central Processing Unit (CPU) 102, and an input/output interface 110 is connected to the CPU 102 via a bus 101.

When a user operates the input unit 107 or the like via the input/output interface 110 to input a command, the CPU 102 executes a program stored in the Read Only Memory (ROM) 103 according to the command. Alternatively, the CPU 102 loads a program stored in the hard disk 105 into a Random Access Memory (RAM) 104 and executes the program.

Therefore, the CPU 102 executes the processing according to the above-described flowchart or the processing executed by the configuration of the above-described block diagram. Then, for example, the CPU 102 outputs the processing result from the output unit 106 via the input/output interface 110, or transmits the processing result from the communication unit 108, and further records the processing result in the hard disk 105 as necessary.

Note that the input unit 107 includes a keyboard, a mouse, a microphone, and the like. Further, the output unit 106 includes a Liquid Crystal Display (LCD), a speaker, and the like.

Here, in this specification, the processing executed by the computer according to the program is not necessarily executed in time series in the order described in the flowcharts. That is, the processing executed by the computer according to the program also includes processing executed in parallel or individually (for example, parallel processing or processing by an object).

Further, the program may be processed by one computer (processor), or may be processed in a distributed manner by a plurality of computers. Further, the program may be transferred to a remote computer and executed.

Further, in this specification, a system refers to a collection of a plurality of components (devices, modules (components), and the like), and it does not matter whether all the components are in the same housing. Therefore, both a plurality of devices accommodated in different housings and connected via a network and one device in which a plurality of modules are accommodated in one housing are systems.

Further, for example, a configuration described as one device (or processing unit) may be divided to include a plurality of devices (or processing units). On the contrary, the above-described configuration as a plurality of devices (or processing units) may be configured as one device (or processing unit) in common. Further, a configuration other than the above-described configuration may be added to the configuration of each device (or each processing unit). Further, a part of the configuration of a specific device (or processing unit) may be included in the configuration of another device (or another processing unit) as long as the configuration and operation of the entire system are substantially the same.

Further, for example, the present technology may be configured as cloud computing in which one function is shared by a plurality of devices via a network and collectively processed.

Further, for example, the above-described program may be executed in any device. In this case, it is sufficient if the device has necessary functions (function blocks, etc.) and can obtain necessary information.

Further, for example, each step described in the above flowcharts may be executed by one device or shared by a plurality of devices. Further, in the case where one step includes a plurality of processes, the plurality of processes included in one step may be executed by one device or shared by a plurality of devices. In other words, a plurality of processes included in one step may also be executed as a plurality of steps of processes. On the contrary, the processes described as a plurality of steps may be collectively performed as one step.

Note that in a program executed by a computer, processing in steps describing the program may be executed in time series in the order described in this specification, or may be executed separately or in parallel at necessary timings such as when making a call. That is, as long as there is no contradiction, the processing of each step may be performed in an order different from the above-described order. Further, the processing describing the steps of the program may be executed in parallel with the processing of another program, or may be executed in combination with the processing of another program.

Note that a plurality of the present techniques described in this specification can be independently implemented as long as there is no contradiction. Of course, a plurality of arbitrary prior art techniques can be implemented in combination. For example, some or all of the present techniques described in any embodiment may be implemented in combination with some or all of the present techniques described in other embodiments. Further, some or all of any of the above described present techniques may be implemented in combination with other techniques not described above.

< example of combination of configurations >

Note that the present technology may also have the following configuration.

(1)

An information processing apparatus comprising:

an optimization processing unit that generates an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal; and

a transmitting unit that transmits the optimized segment to the client terminal.

(2)

The information processing apparatus according to (1), wherein,

the transmitting unit transmits the content stream to the client terminal via a first network, the apparatus further comprising:

a synchronization optimization processing request unit that requests another information processing apparatus that streams the content to the client terminal via the second network to execute optimization processing synchronized with the optimization processing unit in response to a handover from the first network to the second network that occurs due to movement of the client terminal.

(3)

The information processing apparatus according to (2), wherein,

the synchronization optimization processing request unit transmits, to the other information processing apparatus, information for specifying the segment, an MPD (media presentation description) which is a file describing metadata of the content, and viewpoint direction information indicating a viewpoint direction in the client terminal, and performs control to copy a processing state in the optimization processing unit.

(4)

The information processing apparatus according to (3), wherein,

the viewpoint direction information is attached to the request of the segment and transmitted from the client terminal.

(5)

The information processing apparatus according to (4), wherein,

the client terminal notifies the viewpoint direction information to the optimization processing unit by using a URL parameter defined in MPEG-DASH (MPEG dynamic adaptive streaming over HTTP).

(6)

The information processing apparatus according to any one of (2) to (5), wherein,

the synchronization optimization processing request unit changes the stream quality of the content before and after the switching when requesting the other information processing apparatus to execute the optimization processing.

(7)

The information processing apparatus according to (6), wherein,

the synchronization optimization processing request unit changes the streaming quality of the content based on a traffic prediction in the network after the handover or a resource prediction of the another information processing apparatus.

(8)

An information processing method for an information processing apparatus, comprising:

generating an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal; and

and sending the optimized segments to the client terminal.

(9)

A program for causing a computer of an information processing apparatus to execute information processing, comprising:

generating an optimized segment from the content multicast from the distribution server by performing optimization processing on a segment corresponding to a request of a client terminal according to a viewpoint direction in the client terminal; and

and sending the optimized segments to the client terminal.

Note that the present embodiment is not limited to the above-described embodiment, and various modifications may be made without departing from the scope of the present disclosure. Further, the effects described in this specification are merely examples, not limitations, and other effects may be provided.

List of reference numerals

11 information processing system

12 cloud

13 user terminal

21 DASH client

31 ME host

32 origin server

33 ME platform (orchestrator)

41 VDO

42 database storage unit

43 memory cell

61 database storage unit

62 workflow manager

71 5G core network system

72 Access network

81 data plane

82 application program

83 ME platform

84 UPF
