Method and network node for enabling a content delivery network to handle unexpected traffic surges

Document No. 1510742, published 2020-02-07

Reading note: this technology, "Method and network node for enabling a content delivery network to handle unexpected traffic surges", was created by A·拉腊比 and J·T·D·沃 on 2017-06-20. Its main content is as follows: the present disclosure relates to a content delivery network, and to a method and network node enabling the content delivery network to handle unexpected traffic surges by calculating and using traffic predictions for the content of at least one delivery node.

1. A method performed in a network node for providing a traffic prediction for content of a delivery node in a content delivery network, the method comprising:

-obtaining an initial state of the delivery node and setting a current state to the initial state;

-calculating the traffic prediction for the content of the delivery node based on the current state; and

-providing the traffic prediction for the content of the delivery node to a second network node.

2. The method of claim 1, wherein obtaining the initial state of the delivery node comprises:

-obtaining a configuration, performance indicators and analysis reports from a corresponding traffic analyzer; and

-subscribing to configuration updates, performance indicator updates and analysis report updates from the corresponding traffic analyzer.

3. The method of claim 1, wherein obtaining the initial state of the delivery node comprises:

-obtaining a configuration, performance indicators and analysis reports from a configuration node, a monitoring node and an analysis node, respectively; and

-subscribing to configuration updates, performance indicator updates and analysis report updates from the configuration node, the monitoring node and the analysis node, respectively.

4. The method of claim 1, wherein obtaining the initial state comprises: obtaining initial states of a plurality of delivery nodes.

5. The method of claim 1, wherein calculating the traffic prediction is based on static account provisioning, dynamic account provisioning, predicted account provisioning, and measured and predicted traffic at the delivery node.

6. The method of claim 4, wherein providing the traffic prediction for the content comprises: providing the traffic prediction for the plurality of delivery nodes, wherein the traffic prediction is provided to a request router.

7. The method of claim 6, wherein the traffic prediction is provided to a plurality of request routers.

8. The method of claim 1, further comprising: receiving a configuration update, a performance indicator update, and an analysis report update, and updating the current state.

9. The method of claim 6, wherein the current state further comprises a traffic state, wherein the method further comprises:

-receiving a redirect request update from the request router; and

-updating the traffic state with information related to the redirect request.

10. The method of claim 8, wherein the performance indicator updates occur approximately once per second and the analysis report updates occur approximately once every five minutes.

11. The method of claim 9, wherein the redirect request update occurs continuously.

12. The method of claim 9, wherein the traffic state is stored in a traffic meter.

13. A method performed in a network node for handling requests for content in a content delivery network, the method comprising:

-receiving said request for said content from a client;

-for at least one of a plurality of delivery nodes, obtaining a traffic prediction for the content;

-in response to obtaining the traffic prediction for the content, selecting one of the plurality of delivery nodes for providing the content to the client; and

-sending metadata associated with the request to a second network node.

14. The method of claim 13, further comprising: subscribing to a health check service, subscribing to performance indicator updates, and subscribing to traffic prediction updates for the plurality of delivery nodes.

15. The method of claim 14, further comprising:

-receiving and storing performance indicators; and

-updating a delivery node blacklist in accordance with the performance indicators.

16. The method of claim 14, further comprising:

-receiving a service status from the health check service; and

-updating a delivery node blacklist according to the service status.

17. The method of claim 14, further comprising: after receiving the traffic prediction from a traffic analyzer:

-updating a delivery node blacklist in accordance with the traffic prediction.

18. The method of claim 13, wherein the network node is a request router and the second network node is a traffic analyzer.

19. The method of claim 15, wherein the performance indicator is received approximately every ten seconds.

20. The method of claim 16, wherein the service status is received about once per second.

21. The method of claim 17, wherein the traffic prediction is received continuously.

22. The method of claim 13, wherein selecting the delivery node further comprises:

-locating a cluster of delivery nodes based on client proximity;

-discarding a delivery node from the cluster of delivery nodes if the delivery node is listed in a delivery node blacklist;

-applying a content-based request routing algorithm to select the delivery node; and

-redirecting the request from the client to the selected delivery node.

23. A traffic analyzer for providing traffic predictions for content of a delivery node in a content delivery network, the traffic analyzer comprising processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the traffic analyzer is operable to:

-obtain an initial state of the delivery node and set a current state to the initial state;

-calculate the traffic prediction for the content of the delivery node based on the current state; and

-provide the traffic prediction for the content of the delivery node to a second network node.

24. A request router for handling requests for content in a content delivery network, the request router comprising processing circuitry and a memory, the memory containing instructions executable by the processing circuitry, whereby the request router is operable to:

-receive said request for content from a client;

-for at least one of a plurality of delivery nodes, obtain a traffic prediction for the content;

-in response to obtaining the traffic prediction for the content, select one of the plurality of delivery nodes for providing the content to the client; and

-send metadata associated with the request to a second network node.

25. A content delivery network for providing content to a client and capable of handling unexpected traffic surges, the content delivery network comprising:

-at least one traffic analyzer according to claim 23; and

-at least one request router according to claim 24, wherein the at least one request router continuously receives traffic predictions from the at least one traffic analyzer.

Technical Field

The present disclosure relates to a content delivery network and a method and network node for enabling a content delivery network to handle unexpected traffic surges.

Background

A content delivery network (CDN) 10, such as the network shown in fig. 1, has been developed and used to meet ever-increasing business demands for efficiently delivering disparate content to end users. Due to the continuing growth of Television (TV)/video consumption (e.g., over-the-top (OTT) content) on the internet, there is a need to further develop such networks.

As a value-added network, the CDN 10 is built on top of the internet to improve the Round Trip Time (RTT), and thereby the quality of experience (QoE), when delivering any type of content to end users. This delivery is mainly coordinated by three types of entities or functions:

delivery nodes (DN) 70, which are replica or proxy servers deployed in different locations near the end users 40 to provide a High Availability (HA) and high performance network;

a control plane 50 for configuring, monitoring and operating the network of delivery nodes;

Request Routers (RR) 20, which, as a key function, continuously compute the cost of delivery from the different delivery nodes to the end user along different dimensions (proximity, load, operating conditions, content affinity, etc.), based on different factors available in the control plane.

Two main request routing mechanisms are used in CDNs: one is based on the hypertext transfer protocol (HTTP) and the other on the Domain Name System (DNS). The following is a comparison of feature parity between these routing mechanisms:

Table 1 (reproduced as an image in the original document) compares feature parity between the HTTP-based and DNS-based request routing mechanisms.

HTTP-RR is explicit and therefore more transparent than DNS-RR. The HTTP-RR mechanism is more frequently employed for video delivery (real-time, VOD, etc.) because it is not only richer in features, but also provides more control and efficiency in content delivery.

Referring to fig. 2, using the HTTP-RR mechanism, the RR 20 has two methods to determine the best DN 70 to use to serve an end user request:

-selecting, in a round-robin fashion, a healthy DN from the cluster located in the area where the end user originated the request; or

-selecting a DN in a sticky manner, i.e. preferably selecting, based on content affinity, the same DN as previously selected, which is called Content Based Request Routing (CBRR).

The latter is more suitable for Video On Demand (VOD) delivery because it efficiently uses storage when caching content. Furthermore, it can also scale smoothly due to the nature of VOD traffic (i.e., low concurrency).

Real-time video traffic is also suitable for CBRR when the audience size grows steadily, since that translates into low initial concurrency and traffic.

Turning to fig. 3, if a group of users S1 is streaming real-time channel C1 on delivery node DN1, then, as a result of applying the CBRR algorithm, a new group of users S2 from the same area will also be routed to DN1. Because a session already exists on DN1, any new request will join the current session. This eliminates overhead on the next-hop DN and/or on the origin server 80 (fig. 2). This efficiency is maintained by the RR until monitoring detects an increase in resource utilization reaching an upper threshold and overload protection is triggered. The RR then scales out the delivery: a second DN is selected to assist DN1 in delivering C1. This scaling principle is applied until there are no more resources in the network. Scale-in only begins when the traffic decreases and reaches a lower threshold.
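A minimal sketch of this threshold-driven behaviour is given below (the disclosure does not prescribe an implementation; all names, thresholds and data structures are illustrative assumptions):

```python
# Sketch of CBRR with reactive, threshold-based scale-out.
UPPER_THRESHOLD = 0.85  # assumed overload-protection watermark (0.0-1.0)

def select_dn_cbrr(channel, cluster, sessions, utilization):
    """Pick a DN for a new request for `channel`.

    sessions:    dict channel -> list of DNs currently serving that channel
    utilization: dict DN -> last measured resource utilization (0.0-1.0)
    """
    # Content affinity: stick to a DN already serving this channel, as long
    # as monitoring still reports it below the upper threshold.
    for dn in sessions.get(channel, []):
        if utilization[dn] < UPPER_THRESHOLD:
            return dn
    # Scale out: recruit an additional DN from the regional cluster.
    for dn in cluster:
        if dn not in sessions.get(channel, []) and utilization[dn] < UPPER_THRESHOLD:
            sessions.setdefault(channel, []).append(dn)
            return dn
    return None  # no capacity left in the regional cluster

# Example: S1 put C1 on DN1; a request from S2 joins DN1 while it is healthy.
sessions = {"C1": ["DN1"]}
print(select_dn_cbrr("C1", ["DN1", "DN2"], sessions, {"DN1": 0.4, "DN2": 0.1}))  # DN1
print(select_dn_cbrr("C1", ["DN1", "DN2"], sessions, {"DN1": 0.9, "DN2": 0.1}))  # DN2
```

The scale-out branch is where the problem described next arises: the measured utilization lags behind the real traffic, so during a flash event the upper threshold is detected too late.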

For real-time flash events, such as sporting events (e.g., football games), music shows, or breaking news, the feedback from monitoring often arrives too late to scale out in time. At instant t1, the RR decides to route end users to DN1, which currently has a low processing load. However, sending such a large number of users to DN1 at the same time can produce an immediate overload on the node at t2. Once the overload is reported by the monitoring service, the RR will of course scale out, and the overload effect will propagate to the next selected delivery node. However, the experience of the end users already assigned to the overloaded DN1 has by then been impacted: the media player will eventually either fall back to the lowest streaming profile (bit-rate reduction, with degraded image quality and possible visual artifacts) or suffer from large delays and stalling.

The above makes it difficult to use CBRR for real-time channels during large events, making round-robin selection seem the more suitable choice. However, round-robin would mean that delivery of the same channel is spread across all DNs of the regional cluster during a flash event, resulting in more traffic and sessions on the next hop and on the origin server.

Disclosure of Invention

There is provided a method performed in a network node for providing a traffic prediction for content of a delivery node in a content delivery network. The method comprises: obtaining an initial state of the delivery node and setting a current state to the initial state; calculating the traffic prediction for the content of the delivery node based on the current state; and providing the traffic prediction for the content of the delivery node to a second network node.

A method performed in a network node for handling a request for content in a content delivery network is also provided. The method comprises: receiving the request for the content from a client; for at least one of a plurality of delivery nodes, obtaining a traffic prediction for the content; in response to obtaining the traffic prediction for the content, selecting one of the plurality of delivery nodes for providing the content to the client; and sending metadata associated with the request to a second network node.

There is provided a traffic analyzer for providing traffic predictions for content of a delivery node in a content delivery network, the traffic analyzer comprising processing circuitry and a memory. The memory contains instructions executable by the processing circuitry whereby the traffic analyzer is operable to: obtain an initial state of the delivery node and set a current state to the initial state; calculate the traffic prediction for the content of the delivery node based on the current state; and provide the traffic prediction for the content of the delivery node to a second network node.

A request router is provided for handling requests for content in a content delivery network, the request router comprising processing circuitry and a memory. The memory contains instructions executable by the processing circuitry whereby the request router is operable to: receive the request for content from a client; for at least one of a plurality of delivery nodes, obtain a traffic prediction for the content; in response to obtaining the traffic prediction for the content, select one of the plurality of delivery nodes for providing the content to the client; and send metadata associated with the request to a second network node.

A content delivery network is provided for providing content to clients and capable of handling unexpected traffic surges. The content delivery network comprises at least one traffic analyzer as described above and at least one request router as described above. The at least one request router continuously receives traffic predictions from the at least one traffic analyzer.

Drawings

FIG. 1 is a schematic diagram of a CDN high level architecture;

FIG. 2 is a schematic diagram showing a request router in the CDN;

FIG. 3 is a schematic diagram of an example problem;

FIG. 4 is a diagram illustrating a Transactions Per Second (TPS) spike caused by a flash event, according to one embodiment;

FIG. 5 is a schematic diagram of an example embodiment including a traffic analyzer;

FIG. 6 is a schematic diagram of an embodiment that addresses the exemplary problem shown in FIG. 3;

FIG. 7 is a flow chart of an example embodiment;

FIG. 8 is a schematic diagram of an alternative example embodiment;

FIG. 9 is a flow diagram of a method performed by a traffic analyzer, according to one embodiment;

FIG. 10 is a flow diagram of a method performed by a request router, according to one embodiment;

fig. 11 is a schematic diagram of a network node according to an embodiment;

fig. 12 is a schematic diagram of a cloud environment in which certain embodiments of traffic analyzers and request routers may be deployed.

Detailed Description

Various features and embodiments will now be described with reference to the drawings to fully convey the scope of the disclosure to those skilled in the art.

Many aspects will be described in terms of sequences of actions or functions. It will be recognized that in some embodiments, certain functions or actions may be performed by specialized circuits, by program instructions being executed by one or more processors, or by a combination of both.

Furthermore, some embodiments may be partially or fully embodied in the form of a computer readable carrier or carrier wave containing a suitable set of computer instructions that would cause a processor to perform the techniques described herein.

In some alternative embodiments, the functions/acts may occur out of the order noted in the sequence of acts or concurrently. Moreover, in some figures, certain blocks, functions, or acts may be optional and may or may not be performed; these blocks, functions or acts are generally illustrated using dashed lines.

In short, the solution to the above-mentioned problem, illustrated in fig. 4, is to detect flash events quickly and intelligently and to handle them, thus ultimately avoiding overloading the delivery network and affecting the user QoE. In this solution, CBRR is used in a dynamic manner so as to have a positive impact on video delivery.

Turning to fig. 5, it is proposed to add a traffic prediction service, using a traffic analyzer 30 and an analysis component 60, on top of the existing traffic KPIs collected from the DNs 70. This service records not only the density of user requests newly arriving at the RR 20, but also their associated service provisioning metadata. This in turn provides the RR 20 with the ability to calculate and predict the traffic load on a DN before making a redirection decision.

The real-time nature of the prediction service should compensate for the gap between KPI collection intervals. A flash event should be detected at its very onset, allowing the RR to respond in the most proactive way and to optimize CBRR for efficient utilization of CDN resources.

Introducing traffic load prediction capability in the RR should help the RR to smoothly handle flash events without compromising the end user streaming experience.

Fig. 6 shows an example flash event with 10,000 simultaneous requests for the same channel or content. Based on the costs computed along different dimensions (e.g., throughput, CPU, latency) for each DN, together with the newly obtained predicted costs based on the service levels defined for the newly requested services, the RR is better equipped to use CBRR intelligently and split the 10,000 sessions between two DNs.
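A minimal sketch of such a split (the disclosure does not specify how the split is computed; distributing sessions in proportion to each DN's predicted remaining capacity is an assumption, and all numbers are illustrative):

```python
# Split a surge of simultaneous sessions across DNs in proportion to their
# predicted headroom (spare capacity).

def split_sessions(n_sessions, predicted_headroom):
    """predicted_headroom: dict DN -> predicted spare capacity in sessions."""
    total = sum(predicted_headroom.values())
    return {dn: round(n_sessions * spare / total)
            for dn, spare in predicted_headroom.items()}

# 10,000 sessions, two DNs with predicted headroom of 9,000 and 6,000:
print(split_sessions(10_000, {"DN1": 9_000, "DN2": 6_000}))
# {'DN1': 6000, 'DN2': 4000} -- proportional to predicted headroom
```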

The proposed solution (described in further detail below) should benefit not only CDN utilization but also the end user experience, preventing DNs from having their resources overloaded by sudden traffic surges and protecting end users from poor quality of service. Furthermore, DNS-RR based traffic (e.g., Web content) should also benefit from cost prediction added to the weighted round-robin feature. The solution should also eliminate the need to fall back from CBRR to the basic round-robin approach, which increases the load on the origin server.

The RR with its predictive routing service will now be described in more detail with reference to the example shown in fig. 7, which illustrates an overall sequence of operations in the control plane, with interactions between the different components, to support flash or unexpected traffic surge events.

As previously described, today's RR 20 operates in a stateless mode: its decision capability is based on data collected from past or ongoing traffic on the DNs 70. The solution proposed herein transitions the RR 20 from a passive mode to an active mode by introducing two levels of intelligence: machine-learning assistance, which processes previously collected big data, and tracking of the new traffic redirected to each DN 70. With this method, the RR is able to account for ongoing traffic and to predict the new traffic load before deciding to which DN to redirect an incoming request from a client 40.

In initial steps 701 to 715, the request router 20 obtains information from the various nodes, i.e., Configuration (CFG) 58, geography/policy 52, monitoring KPIs 56, health check 54 and traffic analyzer 30, and registers for updates from these network nodes. The configuration data requested and received from the CFG service node 58 (steps 701-702) may include, for example, the CDN topology (network, DN, IP grid …) and service provisioning (real-time/video-on-demand, HDS/HLS …) configurations. The geographic/policy data requested and received from the geographic/policy node 52 (steps 704-705) relates to client access policies associated with Internet Protocol (IP) geographic locations. The monitoring KPI data requested and received from the monitoring node 56 (steps 707-708) may comprise, for example, Central Processing Unit (CPU), throughput and latency data for each DN. The health check data requested and received from the health check node 54 (steps 710-711) involves continuous monitoring of each DN's service availability (e.g., network connectivity, port reachability, etc.). The traffic data requested and received from the traffic analyzer node 30 (steps 713-714) is an initial snapshot of the predicted traffic shape at traffic analyzer startup. The RR is then ready to process client requests (step 716).

At step 717, the configuration is changed in the CFG service 58. The RR is updated at step 718 and the traffic analyzer 30 is updated at step 719. The updated information sent from the CFG node to the RR and the traffic analyzer 30 may include, for example, information described further below with respect to Table 2. At step 720, the RR processes the CFG change, i.e., updates the configuration information it stores locally. This update occurs approximately once per hour.

At step 721, the GeoIP data is changed; for example, a new IP range may be added. The RR is updated (step 722) and processes the GeoIP change (step 723), e.g., by storing the new IP range in a local copy in the RR's memory. This update occurs approximately once per week.

At step 724, there is a KPI update based on DN traffic processing. Update information, which may include information such as that described further below with respect to Table 2, is sent from the monitoring node 56 to the RR 20 at step 725 and to the traffic analyzer 30 at step 726. This update occurs on a roughly per-second basis and can occur every few seconds, for example every ten seconds.

At step 727, a DN's health status changes as a function of the network status or traffic load. At step 728, the health check node 54 updates the RR 20 with the updated DN status. This update occurs on a roughly per-second basis. At step 729, the RR processes the DN status change; for example, the RR updates its blacklist and whitelist of DNs. Using the blacklist and whitelist, the RR redirects incoming user requests only to DNs that are functioning properly.

The analysis node 60 runs scheduled analysis reports (step 730) and updates the traffic analyzer accordingly, approximately once per minute (step 731). Running an analysis report may include measuring the number of requests and transactions per second handled by each DN, as well as measuring the packet size and the cache duration (i.e., the maximum-lifetime header value provisioned for each account). The analysis report may be based on, for example, information described further below with respect to Table 2.

In response to receiving the analysis report, the Traffic Analyzer (TA) 30 calculates a traffic cost for each DN based on the provisioned traffic (step 732) and updates the RR 20 with the traffic cost. Such updates occur on the order of milliseconds. The RR 20 then processes the traffic cost change (steps 734-735): it aggregates the predicted traffic load for each DN based on the requests, each request having a weight determined along different dimensions (e.g., CPU, bandwidth and latency requirements) (step 735). The RR then evaluates the high and low scores (step 736), i.e., the upper and lower KPI thresholds (bandwidth, CPU …) used to blacklist and whitelist DNs: a DN is blacklisted when it exceeds the high score and is whitelisted again only when it falls back below the low score. This hysteresis prevents jitter effects on the DN. The RR updates the DN blacklist and whitelist it maintains based on the predicted traffic (steps 737-738).
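A minimal sketch of this hysteresis (the disclosure only names the high/low score principle; the threshold values here are assumptions):

```python
# Hysteresis-based blacklisting: a DN is blacklisted above the high score
# and whitelisted again only below the low score, preventing oscillation
# (jitter) around a single threshold.

HIGH_SCORE = 0.90  # assumed upper KPI threshold (fraction of capacity)
LOW_SCORE = 0.70   # assumed lower KPI threshold

def update_blacklist(dn, predicted_load, blacklist):
    """Update `blacklist` (a set of DNs) for `dn` given its predicted load."""
    if dn in blacklist:
        if predicted_load < LOW_SCORE:
            blacklist.discard(dn)  # healthy again: back on the whitelist
    elif predicted_load > HIGH_SCORE:
        blacklist.add(dn)          # overloaded: stop redirecting to it
    return blacklist

bl = set()
update_blacklist("DN1", 0.95, bl)  # blacklisted: crossed the high score
update_blacklist("DN1", 0.80, bl)  # still blacklisted: 0.80 is above LOW_SCORE
update_blacklist("DN1", 0.60, bl)  # whitelisted: dropped below the low score
print(bl)  # set()
```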

At this point, the RR 20 is ready to receive a new client request (step 739). In the example of fig. 7, the client request is for a media manifest, which includes information about the content such as mp4 segments, quality levels, and the like. The RR processes the request by performing URL translation, account, policy and token validation, and proximity screening (step 740). This step is essentially a normalization performed by the RR for the CDN: the request is handled by translating the URL so that it is unique within the CDN, and access is granted by checking the token and applying policies based on the account configuration. The RR is then ready to select a DN to serve the request (step 741). The selection of the DN may be made using CBRR, but other algorithms may also be used, as will be apparent to those skilled in the art.

At step 742, the RR sends the request data (selected DN, URL, account provisioning information) to the traffic analyzer. Account provisioning may be defined as a service canvas that holds a particular feature configuration associated with a service level agreement, e.g., the account of a content provider (e.g., a Canadian broadcaster) performing HLS delivery for a particular device brand (named only in an image in the original document). In response, the traffic analyzer predicts the traffic cost based on its model (the account provisioning metadata, which includes characteristics of the content provider's content) (step 743), and aggregates the cost per DN (e.g., CPU, bandwidth and latency costs) based on the measured KPIs and the predicted traffic (step 744).

At step 745, the RR sends a temporary redirect to the client, and at step 746, the client requests a media manifest from node DN 1.

The DN processes the request in steps 747 to 752: it checks whether the manifest is cached locally (step 748), performs URL conversion from the external path to the CDN-internal path (step 749), sends an HTTP request for the media manifest to the origin server 80 (step 750), and receives a response with the manifest (step 752). The DN then sends the manifest to the client at step 753, and the client may begin playing the requested content.
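A minimal sketch of this cache-or-origin logic (the URL mapping and helper names are illustrative assumptions, not the disclosure's interface):

```python
# Serve a manifest from the local cache when present; otherwise translate
# the external URL to the CDN-internal path and fetch it from the origin.

cache = {}  # internal path -> manifest body

def get_manifest(external_url, fetch_from_origin):
    internal = external_url.replace("/live/", "/cdn-internal/")  # step 749 (assumed mapping)
    if internal in cache:               # step 748: cache hit, no origin round trip
        return cache[internal]
    body = fetch_from_origin(internal)  # steps 750-752: origin round trip
    cache[internal] = body              # later viewers join the cached copy
    return body

# Example with a stubbed origin:
print(get_manifest("/live/C1/manifest.m3u8", lambda p: f"manifest for {p}"))
print(get_manifest("/live/C1/manifest.m3u8", lambda p: "never called"))  # cache hit
```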

Fig. 8 shows how multiple traffic analyzer nodes may operate in conjunction, i.e., work together. Dashed lines 101-110 illustrate the data flows between the nodes and correspond to the data flows explained previously. For example, the analysis node 60 feeds 101, 102 the analysis reports to the traffic analyzers. A traffic analyzer 30 may provide data 110 to a second (possibly redundant) traffic analyzer 30. The request router sends the request data 106, 107 to the traffic analyzers, and the traffic analyzers provide traffic predictions 108, 109 to the RRs 20. The health check node 54 provides data 112 to the RR 20, and the KPI monitoring node 56 provides KPI data 103, 104, 105 to the TA 30 and the RR 20.

To ensure fast processing in the RR 20, the data predicted by the TA 30 should be kept in memory and be available for instantaneous retrieval. Thus, in one embodiment, it is proposed to have two TAs 30 operating in tandem. Both TAs register for the updates from the KPI monitor 56 and for the redirect request data from all RRs (see step 742 of fig. 7). If one TA fails, the other TA keeps the information flowing and provides the most up-to-date data to all active RRs, and to the failed TA when it comes back online.

Traffic analyzer interworking is now explained. The TA can base its traffic prediction on inputs from different components:

-CFG: it provides some static metadata of Account Offerings (AO);

-KPI monitor: it provides KPI metrics collected from all the running DNs;

-analysis: machine learning and prediction data about user traffic constructed for each AO and DN; and

-RR: a newly redirected client request.

Throughout operation, RR and TA will continuously exchange feedback data with each other:

on each HTTP redirect, the RR updates the TA with dynamic client data: to which DN the request was redirected, which service is provided to the client, which video the client is going to stream, etc.;

by aggregating user data constantly pushed from all RRs and normalizing them on top of periodically updated KPIs and analysis, the TA will be able to view the entire data plane. This valuable information is then fed back to all RRs for real-time traffic redirection.

The frequency of traffic updates published by the TA can be calculated as follows:

f_TA = 1000 / (R_RR / R_DN)

where:

R_RR: TPS per RR

R_DN: TPS per DN

f_TA: frequency of TA issuance (expressed as a period, in milliseconds)

For the rest of this document, it will be assumed that f_TA has a value of 50 milliseconds, although this value may of course differ.

The prediction function in the TA is based on a formula combining static, dynamic, measured and predicted inputs from four different components, arriving at different time intervals as previously described:

Table 2 (reproduced as an image in the original document) lists these static, dynamic, measured and predicted inputs, their sources and their update intervals.

Using the inputs listed in Table 2, the TA can predict the traffic load on a DN using the following formula:

F_TA = F_ao(S, D, P) + F_dn(M, P)

where:

F_TA: traffic analyzer function

F_ao: account offering function

F_dn: delivery node function

S: static KPIs

D: dynamic KPIs

M: measured KPIs

P: predicted KPIs

In an example embodiment, different KPIs may be defined and/or have, for example, the following values:

S, static KPIs: AccountOfferingID = 123, ContentType = HSS, ServiceType = real-time, CachingType = memory.

D, dynamic KPIs: AccountOfferingID = 123, URLPath = 1/qualityLevel …, SelectedDN = DN1.

P, predicted KPIs: AccountOfferingID = 123, SelectedDN = DN1, AvgSize = 1612345, TPS = 600.

M, measured KPIs: SelectedDN = DN1, CPU = 45, throughput = 4123456789, latency = 123.

The account offering function may be defined as: ((a/AvgSize) + (b/ContentType) + (c/ServiceType) + (d/CachingType)), where a + b + c + d equals 100%.

The delivery node function may be defined as: (CPU/TPS; throughput/TPS), where the CPU cost is CPU/TPS and the throughput cost is throughput/TPS; the cost refers to the pair of CPU cost and throughput cost.
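A minimal sketch of these functions, using the example KPI values above, is given below. The source defines F_ao and F_dn only abstractly, so the way each categorical trait is mapped to a number, the size normalization and the weight values are all assumptions made here:

```python
# Sketch of F_TA = F_ao(S, D, P) + F_dn(M, P) with illustrative scoring.

def f_ao(static, dynamic, predicted, a=0.4, b=0.2, c=0.2, d=0.2):
    """Account offering cost: weighted mix of provisioning traits.

    a + b + c + d = 1.0 (the "100%" in the text). Each trait is mapped to
    an assumed normalized cost in [0, 1]; real mappings would be learned.
    The dynamic KPIs (e.g., URLPath) are not used in this simplified scoring.
    """
    content = {"HSS": 0.6, "HLS": 0.5}.get(static["ContentType"], 0.5)
    service = {"real-time": 0.8, "VOD": 0.4}.get(static["ServiceType"], 0.5)
    caching = {"memory": 0.2, "disk": 0.6}.get(static["CachingType"], 0.5)
    size = min(predicted["AvgSize"] / 10_000_000, 1.0)  # assumed normalization
    return a * size + b * content + c * service + d * caching

def f_dn(measured, predicted):
    """Delivery node cost: per-transaction CPU and throughput costs."""
    tps = predicted["TPS"]
    return {"cpu_cost": measured["CPU"] / tps,
            "throughput_cost": measured["throughput"] / tps}

static = {"ContentType": "HSS", "ServiceType": "real-time", "CachingType": "memory"}
dynamic = {"URLPath": "1/qualityLevel", "SelectedDN": "DN1"}
predicted = {"SelectedDN": "DN1", "AvgSize": 1612345, "TPS": 600}
measured = {"SelectedDN": "DN1", "CPU": 45, "throughput": 4123456789, "latency": 123}

print(f_ao(static, dynamic, predicted))  # account offering cost score
print(f_dn(measured, predicted))         # {'cpu_cost': 0.075, 'throughput_cost': ...}
```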

Of course, in other embodiments, the functions may be defined differently depending on system design and requirements, as will be apparent to those skilled in the art.

Turning to fig. 9, a method performed in a network node for providing a traffic prediction for content of a delivery node in a content delivery network is shown. The method comprises: obtaining an initial state of the delivery node and setting a current state to the initial state (step 901); calculating a traffic prediction for the content of the delivery node based on the current state (step 910); and providing the traffic prediction for the content of the delivery node to a second network node.

In the method, obtaining the initial state of the delivery node may include: obtaining a configuration (step 903), performance indicators (step 904) and analysis reports (step 905) from a corresponding traffic analyzer; and subscribing to configuration updates, performance indicator updates and analysis report updates from the corresponding traffic analyzer (steps 906-908).

In the method, obtaining the initial state of the delivery node may alternatively include: obtaining the configuration, performance indicators and analysis reports from a configuration node, a monitoring node and an analysis node, respectively; and subscribing to configuration updates, performance indicator updates and analysis report updates from the configuration node, the monitoring node and the analysis node, respectively (steps 906-908).

Obtaining the initial state may include: obtaining initial states of a plurality of delivery nodes.

The traffic prediction may be based on static account provisioning, dynamic account provisioning, predicted account provisioning, and measured and predicted traffic at the delivery node.

In the method, providing the traffic prediction for the content may include providing traffic predictions for a plurality of delivery nodes, which may be provided to a request router (step 911).

The traffic prediction may be provided to a plurality of request routers.

The method may further comprise: receiving configuration updates, performance indicator updates and analysis report updates (steps 915 and 917), and updating the current state (steps 916 and 918).

The current state may further include a traffic state, and the method may further include: receiving a redirect request update from the request router (step 919); and updating the traffic state with information related to the redirect request (step 920).

The performance indicator updates may occur approximately once per second, or alternatively once every ten seconds (step 912), and the analysis report updates may occur approximately once every five minutes (step 913). Redirect request updates may occur continuously (step 914). The traffic state may be stored in a traffic meter. Steps 910 through 920 are executed in a loop, and steps 909 and 921 may be executed, for example, once every 50 milliseconds (step 922).

Turning to fig. 10, a method performed in a network node for handling a request for content in a content delivery network is shown. The method comprises: receiving a request for content from a client (step 1015); obtaining, for at least one of a plurality of delivery nodes, a traffic prediction for the content (step 1013); in response to obtaining the traffic prediction for the content, selecting one of the plurality of delivery nodes for providing the content to the client (step 1022); and sending metadata associated with the request to a second network node (step 1024).

The method may further comprise: subscribing, for the plurality of delivery nodes, to a health check service, to performance indicator updates and to traffic prediction updates (steps 1001, 1002 and 1003).

The method may further comprise: receiving and storing performance indicators (steps 1008 and 1009); and updating a delivery node blacklist based on the performance indicators (step 1010).

The method may further comprise: receiving a service status from the health check service (step 1011); and updating the delivery node blacklist according to the service status (step 1012).

The method may further comprise: after the step of receiving traffic predictions from the traffic analyzer (step 1013), updating the delivery node blacklist according to the traffic predictions (step 1014).

The network node may be a request router and the second network node may be a traffic analyzer. The performance indicators may be received approximately every ten seconds (step 1005). The service status may be received approximately once per second (step 1006). The traffic predictions may be received continuously (step 1007).

In the method, selecting the delivery node may further include: validating the access policy (step 1016); locating a cluster of delivery nodes based on client proximity (step 1017); discarding a delivery node from the cluster if the delivery node is listed in the delivery node blacklist (steps 1019-1020); applying a content-based request routing algorithm to select the delivery node (step 1021); and redirecting the request from the client to the selected delivery node (step 1023). Steps 1005 to 1024 are executed in a loop (step 1004).
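A minimal sketch of this selection flow is given below (the source does not specify the CBRR algorithm itself, so a hash-based sticky choice stands in for it; all names are illustrative):

```python
# Select a DN for a request: locate the regional cluster (step 1017), drop
# blacklisted DNs (steps 1019-1020), then make a content-sticky choice
# (step 1021). The client is then redirected to the result (step 1023).

import hashlib

def select_dn(client_region, content_id, clusters, blacklist):
    """clusters: dict region -> list of DNs; blacklist: set of excluded DNs."""
    cluster = clusters.get(client_region, [])
    candidates = [dn for dn in cluster if dn not in blacklist]
    if not candidates:
        return None  # no healthy DN left in the regional cluster
    # Content affinity: hash the content identifier so that requests for the
    # same content stick to the same DN while it remains whitelisted.
    index = int(hashlib.sha1(content_id.encode()).hexdigest(), 16)
    return candidates[index % len(candidates)]

clusters = {"region-A": ["DN1", "DN2", "DN3"]}
print(select_dn("region-A", "channel-C1", clusters, blacklist={"DN3"}))
print(select_dn("region-A", "channel-C1", clusters, blacklist={"DN3"}))  # same DN again
```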

Referring to fig. 11, which illustrates the basic components of a network node, a traffic analyzer 30 is shown for providing traffic predictions for content of a delivery node 70 in a content delivery network 10. The traffic analyzer 30 includes processing circuitry 1100 and a memory 1110. The memory 1110 contains instructions executable by the processing circuitry 1100 whereby the traffic analyzer 30 is operable to perform the aforementioned methods, including: obtaining an initial state of the delivery node and setting a current state to the initial state; calculating a traffic prediction for the content of the delivery node based on the current state; and providing the traffic prediction for the content of the delivery node to a second network node.

Still referring to fig. 11, which illustrates the basic components of a network node, a request router 20 is alternatively shown for handling requests for content in the content delivery network 10. The request router comprises processing circuitry 1100 and a memory 1110, the memory 1110 containing instructions executable by the processing circuitry 1100, whereby the request router 20 is operable to perform the methods described hereinbefore, including: receiving a request for content from a client; obtaining, for at least one of a plurality of delivery nodes 70, a traffic prediction for the content; in response to obtaining the traffic prediction for the content, selecting one of the plurality of delivery nodes for providing the content to the client; and sending metadata associated with the request to a second network node.

Fig. 11 shows components of a network node. In certain embodiments, traffic analyzer 30 or request router 20 takes the form of a physical network node that includes processing circuitry 1100, memory 1110, and transceiver 1120.

Fig. 11 is a block diagram of a network node (e.g., TA30 or RR 20) suitable for implementing aspects of the embodiments disclosed herein. The network node comprises a communication interface 1120, which may also be referred to as a transceiver. Communication interface 1120 typically includes analog and/or digital components for sending and receiving communications to and from mobile devices within the wireless coverage area of a network node, as well as sending and receiving communications to and from other network nodes, either directly or via a content delivery network. Those skilled in the art will appreciate that the block diagram of the network node necessarily omits many features that are not necessary for a complete understanding of the present disclosure.

Although not shown in all detail, the network node includes one or more general or special purpose processors or processing circuits 1100, or other microprocessors, which are programmed using suitable software programming instructions and/or firmware to perform some or all of the functions of the network node described herein. Additionally or alternatively, the network node may include various digital hardware blocks (e.g., one or more Application Specific Integrated Circuits (ASICs), one or more off-the-shelf digital or analog hardware components, or a combination thereof) (not shown) configured to perform some or all of the functions of the network node described herein. The processing circuit 1100 may use a memory 1110, such as a Random Access Memory (RAM), to store data and programming instructions that when executed by the processing circuit 1100 implement all or part of the functionality described herein. The network node may also include one or more storage media (not shown) for storing data necessary and/or suitable for implementing the functionality described herein, as well as for storing programming instructions that, when executed on the processing circuit 1100, implement all or part of the functionality described herein. One embodiment of the disclosure may be implemented as a computer program product stored on a computer-readable storage medium, comprising programming instructions configured to cause the processing circuit 1100 to perform the steps described herein.

Referring again to fig. 5, a content delivery network is provided for providing content to clients and capable of handling unexpected traffic surges. The content delivery network comprises at least one traffic analyzer 30 as described above and at least one request router 20 as described above, wherein the at least one request router continuously receives traffic predictions from the at least one traffic analyzer.

Turning to FIG. 12, a schematic block diagram is provided that illustrates a virtualization environment 1200 in which functionality implemented by certain embodiments may be virtualized. As used herein, virtualization may apply to a traffic analyzer or request router, and relates to an embodiment in which at least a portion of functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines executing on one or more physical processing nodes in one or more networks).

In some embodiments, some or all of the functionality described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments 1200 hosted by one or more hardware nodes 1230. Furthermore, in some embodiments, the network nodes may be fully virtualized.

These functions may be implemented by one or more applications 1220 (which may alternatively be referred to as software instances, virtual devices, network functions, virtual nodes, virtual network functions, etc.), which applications 1220 are operable to implement steps of certain methods according to certain embodiments. The application 1220 runs in the virtualized environment 1200, and the virtualized environment 1200 provides hardware 1230 including processing circuitry 1260 and memory 1290. The memory 1290 contains instructions 1295 that are executable by the processing circuit 1260 such that the application 1220 is operable to provide any of the associated features, advantages, and/or functions disclosed herein.

Virtualization environment 1200 includes general purpose or special purpose network hardware devices 1230, and these general purpose or special purpose network hardware devices 1230 include a set of one or more processors or processing circuits 1260, which processors or processing circuits 1260 may be commercially available off-the-shelf (COTS) processors, Application Specific Integrated Circuits (ASICs), or any other type of processing circuit including digital or analog hardware components or special purpose processors. Each hardware device may include memory 1290-1, which may be non-persistent memory for temporarily storing instructions 1295 or software for execution by processing circuit 1260. Each hardware device may include one or more network interface controllers 1270 (NICs) (also referred to as network interface cards) that include a physical network interface 1280. Each hardware device may also include a non-transitory, persistent machine-readable storage medium 1290-2 in which software 1295 and/or instructions executable by the processing circuit 1260 are stored. Software 1295 may include any type of software, including software that instantiates one or more virtualization layers 1250 (also referred to as a hypervisor), software that executes virtual machine 1240, and software that allows the functions described herein for certain embodiments to be performed.

The virtual machine 1240 includes virtual processes, virtual memory, virtual networks or interfaces, and virtual storage, and may be run by a corresponding virtualization layer 1250 or hypervisor. Different embodiments of instances of virtual device 1220 may be implemented on one or more virtual machines 1240 and embodiments may be produced in different ways.

During operation, processing circuit 1260 executes software 1295 to instantiate a hypervisor or virtualization layer 1250, virtualization layer 1250 sometimes referred to as a Virtual Machine Monitor (VMM). Virtualization layer 1250 can present a virtual operating platform that appears to virtual machine 1240 as network hardware.

As shown in fig. 12, hardware 1230 may be a stand-alone network node with general or specific components. Hardware 1230 may include antennas 12225 and may implement certain functions via virtualization. Alternatively, hardware 1230 may be part of a larger hardware cluster (e.g., in a data center or in Customer Premises Equipment (CPE)) in which many hardware nodes work together and are managed via Management and Orchestration (MANO) 12100, which oversees, among other things, lifecycle management of the application 1220.

In some contexts, virtualization of hardware is referred to as Network Function Virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry-standard high-volume server hardware, physical switches and physical storage, which may be located in data centers as well as in client devices.

In the context of NFV, virtual machine 1240 is a software implementation of a physical machine that runs programs as if the programs were executing on a non-virtualized physical machine. Each virtual machine 1240, and the portion of hardware 1230 executing the virtual machine (whether hardware dedicated to the virtual machine and/or hardware shared by the virtual machine with other virtual machines 1240), forms a separate Virtual Network Element (VNE).

Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions running in one or more virtual machines 1240 on hardware network infrastructure 1230, and corresponds to application 1220 in fig. 12.

In some embodiments, one or more radio units 12200 may be coupled to one or more antennas 12225, each radio unit 12200 including one or more transmitters 12220 and one or more receivers 12210. Radio unit 12200 may communicate directly with hardware node 1230 via one or more appropriate network interfaces and may be used in conjunction with virtual components to provide radio capabilities to virtual nodes, such as radio access nodes or base stations.

In some embodiments, some signaling may be implemented using control system 12230, which control system 12230 may alternatively be used for communication between hardware node 1230 and radio 12200.

Modifications and other embodiments will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that modifications and other embodiments (e.g., specific forms other than those of the embodiments described above) are intended to be included within the scope of the present disclosure. The described embodiments are merely illustrative and should not be considered restrictive in any way. The scope sought is given by the appended claims, rather than the preceding description, and all variations and equivalents which fall within the range of the claims are intended to be embraced therein. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
