Method, equipment and system for sharing multicast message load

Document No.: 1941455  Publication date: 2021-12-07

Reading note: this technique, "Method, equipment and system for sharing multicast message load", was designed and created by Xie Jingrong, Ding Chenglong, and Duan Fanghong on 2020-06-05. Its main content is as follows: The application provides a multicast message load sharing method, which comprises the following steps: a first network device receives a first multicast message; the first network device determines, according to a multicast forwarding table entry, a first link group corresponding to the first multicast message, where the first link group includes at least two parallel links between the first network device and a second network device, the second network device is a neighbor of the first network device, and the at least two parallel links are different; and the first network device selects a first link to send the first multicast message, where the first link is one of the at least two parallel links. With the technical solution provided in this application, when one or more of the parallel links over which load sharing is performed fail or recover, the convergence time of the multicast service can be shortened.

1. A method of multicast message load sharing, the method comprising:

the first network equipment receives a first multicast message;

the first network device determines a first link group corresponding to the first multicast message according to a multicast forwarding table entry, where the first link group includes at least two parallel links between the first network device and a second network device, the second network device is a neighbor of the first network device, and the at least two parallel links are different;

and the first network equipment selects a first link to send the first multicast message, wherein the first link is one of the at least two parallel links.

2. The method of claim 1, wherein that the at least two parallel links are different comprises:

the Internet Protocol (IP) addresses of the interfaces corresponding to the at least two parallel links are different.

3. The method according to claim 1 or 2, wherein the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group;

the first network device determines, according to the multicast forwarding table entry, a first link group corresponding to the first multicast message, including:

the first network equipment acquires the first identifier from the first multicast message;

and the first network equipment determines the first link group according to the multicast forwarding table entry and the first identifier.

4. The method according to any of claims 1 to 3, wherein the selecting, by the first network device, the first link to send the first multicast packet comprises:

the first network equipment determines the at least two parallel links according to the identification of the first link group;

and the first network equipment selects the first link from the at least two parallel links to send the first multicast message.

5. The method according to any one of claims 1 to 4, further comprising:

when the state of the first link is unavailable, the first network device selects a second link other than the first link from the at least two parallel links, and sends the first multicast message through the second link.

6. The method according to any of claims 1 to 5, wherein before the first network device determines the first link group corresponding to the first multicast message according to a multicast forwarding entry, the method further comprises:

the first network device receives at least two messages sent by the second network device through each link of the at least two parallel links respectively, wherein the message sent by each link comprises the identification ID of the second network device;

the first network device establishes the first link group including the at least two parallel links based on an ID of the second network device.

7. The method according to any one of claims 1 to 6, wherein the selecting, by the first network device, the first link to send the first multicast packet specifically includes:

the first network device selects the first link from the at least two parallel links according to feature information of the first multicast message, and sends the first multicast message through the first link.

8. A method of multicast message load sharing, the method comprising:

a second network device receives a first multicast packet sent by a first network device through a first link, wherein the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different;

the second network device determines, based on the first link being one link in the second link group, that the first multicast packet passes a reverse path forwarding (RPF) check;

and the second network equipment forwards the first multicast message.

9. The method of claim 8, wherein that the at least two parallel links are different comprises:

the Internet Protocol (IP) addresses of the interfaces corresponding to the at least two parallel links are different.

10. The method according to claim 8 or 9, wherein the determining, by the second network device based on the first link being one link in the second link group, that the first multicast packet passes the reverse path forwarding (RPF) check comprises:

the second network equipment determines the second link group corresponding to the first multicast message according to the multicast forwarding table entry;

the second network device determines that the first multicast packet passes the RPF check based on the first link being one link in a second link group.

11. The method according to claim 10, wherein the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group,

the second network device determines the second link group corresponding to the first multicast packet according to a multicast forwarding table entry, including:

the second network equipment acquires a first identifier from the first multicast message;

and the second network equipment determines the second link group corresponding to the first multicast message according to the first identifier and the multicast forwarding table entry.

12. The method according to claim 10 or 11, wherein the multicast forwarding entry further includes a correspondence between the identifier of the second link group and identifiers of the at least two parallel links in the second link group, and the method further includes:

and the second network device determines that the first link is one link in the second link group according to the identifier of the second link group and the multicast forwarding table entry.

13. The method according to any one of claims 8 to 12, wherein before the second network device determines, based on the first link being one link in the second link group, that the first multicast message passes the reverse path forwarding (RPF) check, the method further comprises:

the second network device receives at least two messages sent by the first network device through each link of the at least two parallel links, wherein the message sent by each link comprises the identification ID of the first network device;

the second network device establishes the second link group including the at least two parallel links based on the ID of the first network device and a correspondence between the identifier of the second link group and the ID of the first network device.

14. A first network device, comprising:

a receiving module, configured to receive a first multicast packet;

a determining module, configured to determine, according to a multicast forwarding table entry, a first link group corresponding to the first multicast message, where the first link group includes at least two parallel links between the first network device and a second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device;

and the selection module is used for selecting a first link to send the first multicast message, wherein the first link is one of the at least two parallel links.

15. The first network device of claim 14, wherein that the at least two parallel links are different comprises:

the Internet Protocol (IP) addresses of the interfaces corresponding to the at least two parallel links are different.

16. The first network device according to claim 14 or 15, wherein the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group;

the determining module is specifically configured to:

acquiring, by an acquiring module, the first identifier from the first multicast message;

and determining the first link group according to the multicast forwarding table entry and the first identifier.

17. The first network device according to any one of claims 14 to 16, wherein the selection module is specifically configured to:

determining the at least two parallel links according to the identification of the first link group;

and selecting the first link from the at least two parallel links to send the first multicast message.

18. The first network device of any of claims 14-17, wherein the selection module is further configured to:

when the state of the first link is unavailable, selecting a second link other than the first link from the at least two parallel links, and sending the first multicast message through the second link.

19. The first network device of any of claims 14-18, wherein the receiving module is further configured to:

receiving at least two messages sent by the second network equipment through each link of the at least two parallel links respectively, wherein the message sent by each link comprises the identification ID of the second network equipment;

further comprising:

an establishing module configured to establish the first link group including the at least two parallel links based on the ID of the second network device.

20. The first network device of any one of claims 14 to 19, wherein the selection module is specifically configured to:

selecting a first link from the at least two parallel links according to the feature information of the first multicast message, and sending the first multicast message through the first link.

21. A second network device, comprising:

a receiving module, configured to receive a first multicast packet sent by a first network device through a first link, where the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different;

a determining module, configured to determine, based on the first link being one link in the second link group, that the first multicast packet passes a reverse path forwarding (RPF) check;

and the sending module is used for forwarding the first multicast message.

22. The second network device of claim 21, wherein that the at least two parallel links are different comprises:

the Internet Protocol (IP) addresses of the interfaces corresponding to the at least two parallel links are different.

23. The second network device of claim 21 or 22, wherein the determining module is specifically configured to:

determining the second link group corresponding to the first multicast message according to the multicast forwarding table entry;

and determining that the first multicast message passes the RPF check based on the first link being one link in the second link group.

24. The second network device of claim 23, wherein the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group,

the determining module is specifically configured to:

acquiring a first identifier from the first multicast message;

and determining the second link group corresponding to the first multicast message according to the first identifier and the multicast forwarding table entry.

25. The second network device of claim 23 or 24, wherein the multicast forwarding entry further includes a correspondence between the identifier of the second link group and identifiers of the at least two parallel links in the second link group,

the determining module is specifically configured to: and determining that the first link is one link in the second link group according to the identifier of the second link group and the multicast forwarding table entry.

26. The second network device of any of claims 21-25, wherein the receiving module is further configured to:

receiving at least two messages sent by the first network equipment through each link of the at least two parallel links respectively, wherein the message sent by each link comprises the identification ID of the first network equipment;

further comprising:

an establishing module configured to establish the second link group including the at least two parallel links based on the ID of the first network device.

27. A first network device, comprising: a processor and a memory, the memory for storing a program, the processor for calling and running the program from the memory to perform the method of any of claims 1 to 7.

28. A second network device, comprising: a processor and a memory, the memory for storing a program, the processor for invoking and running the program from the memory to perform the method of any one of claims 8 to 13.

29. A system for multicast message load sharing, comprising: a first network device as claimed in any of claims 14 to 20 and a second network device as claimed in any of claims 21 to 26.

30. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.

31. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any of claims 8 to 13.

Technical Field

The present application relates to the field of network communications, and in particular, to a method, a first network device, a second network device, and a system for sharing multicast packet load.

Background

Internet Protocol (IP) multicast technology enables efficient point-to-multipoint data transmission in an IP network, effectively saving network bandwidth and reducing network load.

In a scenario in which multiple parallel links exist between two nodes that forward multicast packets and the links are not bundled into a link aggregation group (LAG), a related technical solution sends different multicast source groups (S, G) over different links, thereby achieving load sharing of multicast traffic across the parallel links. However, if one or more of the load-sharing parallel links fails or recovers, the convergence time of the multicast service in this related solution is long.

Therefore, how to shorten the convergence time of the multicast service when one or more of the load-sharing parallel links fails or recovers has become a technical problem that needs to be solved.

Disclosure of Invention

The application provides a method for multicast message load sharing and a first network device, which can shorten the convergence time of multicast service when one or more of a plurality of parallel links for load sharing fails or recovers.

In a first aspect, a method for multicast message load sharing is provided, including: the first network equipment receives a first multicast message; the first network device determines a first link group corresponding to the first multicast message according to a multicast forwarding table entry, where the first link group includes at least two parallel links between the first network device and a second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device; and the first network equipment selects a first link to send the first multicast message, wherein the first link is one of the at least two parallel links.

It should be noted that in this application, a link of a network device being in an available state (up) may be understood as the link being normal and able to forward packets, while a link being in an unavailable state (down) may be understood as the link having failed and being unable to forward packets.

The first link and the second link of the first network device may be two members of the first link group. The links in the first link group may all be in an available state, or some of them may be in an unavailable state. The number of links in the first link group may be any integer greater than 1, which is not specifically limited in this application.

It should be understood that in this application, a "member" may refer to an interface, for example an Ethernet port, through which a network device connects to one of the load-sharing links in the first link group.

In this application, that the at least two parallel links included in the first link group are different may be understood as meaning that two or more links between the connected nodes are not bundled into a link aggregation group (LAG). In a possible implementation manner, the physical interfaces corresponding to the at least two parallel links in the first link group are different. In another possible implementation manner, the Internet Protocol (IP) addresses on the interfaces of the at least two parallel links are different. In another possible implementation manner, both the physical interfaces and the IP addresses corresponding to the at least two parallel links are different. In another possible implementation manner, the link types corresponding to the at least two parallel links in the first link group are different. In another possible implementation manner, the physical interfaces corresponding to the at least two parallel links in the first link group are the same, but the virtual local area networks (VLANs) configured on the physical interfaces are different; that is, the at least two parallel links between the nodes may be regarded as at least two different logical links.

It should be understood that the differences listed above may all apply, or only some of them may apply, as long as the at least two parallel links can be distinguished from one another. For example, the IP addresses of the interfaces on the at least two parallel links may all be different, or only some of them may be different.

In the above technical solution, in a scenario where multiple different parallel links are used to share the load of multicast packets sent between network devices, if at least one link in a load-sharing link group fails or recovers, only the member link states in the link group need to be refreshed; the multicast forwarding entries that, in the prior art, record the correspondence between links and multicast traffic do not need to be refreshed. Because a large amount of multicast traffic may be carried on a small number of parallel links, refreshing the correspondence between multicast traffic and links involves many entries, whereas refreshing the member link states in the link group involves far fewer. Refreshing only the member link states in the link group therefore shortens the convergence time of the multicast service.
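The indirection just described can be sketched as a small data model: each multicast forwarding entry maps a flow identifier to a link-group identifier, and a separate member table maps the link-group identifier to its member links and their states. The class and method names below (`LinkGroup`, `MulticastForwarder`, `mark_link_down`) are illustrative, not taken from the application; this is a minimal in-memory sketch, not the actual device implementation.

```python
# Minimal sketch (names are illustrative): per-flow forwarding entries point
# at a link group, so a link failure refreshes one member-state record
# instead of every multicast forwarding entry that uses the group.

class LinkGroup:
    def __init__(self, group_id, links):
        self.group_id = group_id
        # link name -> state ("up" / "down")
        self.links = {link: "up" for link in links}

    def available_links(self):
        return [l for l, state in self.links.items() if state == "up"]

class MulticastForwarder:
    def __init__(self):
        self.groups = {}        # group id -> LinkGroup (member table)
        self.fwd_entries = {}   # flow id, e.g. (S, G) -> group id

    def add_group(self, group):
        self.groups[group.group_id] = group

    def add_entry(self, flow_id, group_id):
        self.fwd_entries[flow_id] = group_id

    def mark_link_down(self, group_id, link):
        # Only the member table is touched; the (possibly huge) set of
        # per-flow forwarding entries is left untouched.
        self.groups[group_id].links[link] = "down"

    def links_for(self, flow_id):
        group_id = self.fwd_entries[flow_id]
        return self.groups[group_id].available_links()

fwd = MulticastForwarder()
fwd.add_group(LinkGroup("LG1", ["eth1", "eth2", "eth3"]))
for flow in [("10.0.0.1", "232.1.1.%d" % i) for i in range(1000)]:
    fwd.add_entry(flow, "LG1")

fwd.mark_link_down("LG1", "eth2")  # one update covers all 1000 flows
```

After the single `mark_link_down` call, every flow bound to `LG1` immediately resolves to the remaining available links, which is the convergence benefit the solution claims.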

In a possible implementation manner, the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group; the first network equipment acquires the first identifier from the first multicast message; and the first network equipment determines the first link group according to the multicast forwarding table entry and the first identifier.

It should be understood that the first identifier of the first multicast packet may also be referred to as a multicast flow identifier of the first multicast packet.

The identifier of the first link group may be a string name, or may be a link group ID represented by a link group type plus an integer value.

The member table corresponding to the identifier of the first link group may include the identifier of the first link group and at least two link identifiers corresponding to at least two parallel links in the first link group.

Further, the multicast forwarding table entry further includes a correspondence between the identifier of the first link group and identifiers of the at least two parallel links in the first link group.

In another possible implementation manner, the first network device determines the at least two parallel links according to the identifier of the first link group, and selects the first link from the at least two parallel links to send the first multicast message according to the correspondence between the identifier of the first link group and the identifiers of the at least two parallel links.

In another possible implementation manner, the method further includes: and when the state of the first link is unavailable, the first network equipment selects a second link except the first link from the at least two parallel links, and sends the first multicast message through the second link.

In another possible implementation manner, before the first network device determines, according to a multicast forwarding entry, the first link group corresponding to the first multicast message, the method further includes: the first network device receives at least two messages sent by the second network device through each of the at least two parallel links, where the message sent over each link includes the identifier (ID) of the second network device; the first network device determines that the ID of the second network device included in each of the at least two messages is the same; and the first network device establishes the first link group including the at least two parallel links based on the ID of the second network device and a correspondence between the identifier of the first link group and the ID of the second network device.
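The group-establishment step above can be illustrated as follows: a device collects the hello messages received on its parallel links, and links whose messages carry the same neighbor ID are placed into one link group. The message representation and function name here are hypothetical simplifications, not the application's wire format.

```python
# Hypothetical sketch: group parallel links by the neighbor ID carried in
# the hello message received on each link. Links announcing the same
# neighbor ID form one link group.

def build_link_groups(hellos):
    """hellos: list of (link_name, neighbor_id) pairs."""
    groups = {}
    for link, neighbor_id in hellos:
        groups.setdefault(neighbor_id, []).append(link)
    # Keep only groups with at least two parallel links, per the claims.
    return {nid: links for nid, links in groups.items() if len(links) >= 2}

hellos = [("eth1", "R2"), ("eth2", "R2"), ("eth3", "R2"), ("eth4", "R9")]
groups = build_link_groups(hellos)
# Three links report neighbor "R2", so they form one group; "R9" is seen
# on only one link and therefore produces no link group.
```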

In another possible implementation manner, the first network device selects a first link from the at least two parallel links according to the feature information of the first multicast packet, and sends the first multicast packet through the first link.

It should be understood that, the present application does not specifically limit the feature information of the first multicast packet, and may include one or more of the following: the source address information of the first multicast packet, the destination address information of the first multicast packet, the source address and the destination address information of the first multicast packet, the hash result information corresponding to the source address of the first multicast packet, the hash result information corresponding to the destination address information of the first multicast packet, and the hash result information corresponding to the source address and the destination address information of the first multicast packet.

In another possible implementation manner, the first network device obtains the first link in the first link group by performing a modulo operation on the feature information of the first multicast packet with the number of links in the first link group that are in an available state, and sends the first multicast packet through the first link.
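The modulo selection described above can be sketched as follows. Hashing the (source, group) addresses is one possible choice of feature information among those the application lists, and the function name and use of CRC32 are illustrative assumptions, not the application's prescribed hash.

```python
import zlib

def select_link(available_links, src, grp):
    # Hash the packet's feature information (here: source and group
    # addresses) and take it modulo the number of available links, so the
    # same flow always maps to the same link while the link set is stable.
    key = ("%s,%s" % (src, grp)).encode()
    index = zlib.crc32(key) % len(available_links)
    return available_links[index]

links = ["eth1", "eth2", "eth3"]
first = select_link(links, "10.0.0.1", "232.1.1.1")
again = select_link(links, "10.0.0.1", "232.1.1.1")
# Deterministic: the same flow picks the same link, keeping packets of one
# multicast flow in order on one link while different flows spread out.
```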

In a second aspect, another method for multicast message load sharing is provided, including: a first network device receives a first multicast join message through a first link, where the first multicast join message includes a first identifier of a first multicast message, and the first link is one link in a first link group; and the first network device establishes a multicast forwarding table entry, where the multicast forwarding table entry includes a correspondence between the first identifier and an identifier of the first link group, and the first link group includes at least two parallel links between the first network device and a second network device, where the at least two parallel links are different.

In one possible implementation, the method further includes: the first network device receives a second multicast join message through a second link, where the second multicast join message includes a second identifier of a second multicast message, and the second link is one link in a second link group; and the first network device establishes the multicast forwarding table entry, where the multicast forwarding table entry further includes a correspondence between the second identifier and an identifier of the second link group.

In another possible implementation manner, the method further includes: the first network equipment receives the first multicast message; the first network equipment determines a first link group corresponding to the first multicast message according to the multicast forwarding table entry; and the first network equipment selects a second link to send the first multicast message, wherein the second link is one of the at least two parallel links.

In this application, that the at least two parallel links included in the first link group are different may be understood as meaning that two or more links between the connected nodes are not bundled into a link aggregation group (LAG). In a possible implementation manner, the physical interfaces corresponding to the at least two parallel links in the first link group are different, and the Internet Protocol (IP) addresses on the interfaces of the at least two parallel links are different. In another possible implementation manner, the link types corresponding to the at least two parallel links in the first link group are different. In another possible implementation manner, the physical interfaces corresponding to the at least two parallel links in the first link group are the same, but the virtual local area networks (VLANs) configured on the physical interfaces are different; that is, the at least two parallel links between the nodes may be regarded as at least two different logical links.

It should be understood that the differences listed above may all apply, or only some of them may apply, as long as the at least two parallel links can be distinguished from one another. For example, the IP addresses of the interfaces on the at least two parallel links may all be different, or only some of them may be different.

In another possible implementation manner, the network device obtains the first identifier from the first multicast packet; and the first network equipment determines the first link group according to the multicast forwarding table entry and the first identifier.

In another possible implementation manner, the first network device determines the at least two parallel links according to the identifier of the first link group; and the first network equipment selects the first link from the at least two parallel links to send the first multicast message.

In another possible implementation manner, the method further includes: and when the state of the first link is unavailable, the first network equipment selects a second link except the first link from the at least two parallel links, and sends the first multicast message through the second link.

In another possible implementation manner, before the first network device determines, according to the multicast forwarding entry, the first link group corresponding to the first multicast message, the method further includes: the first network device receives at least two messages sent by the second network device through each of the at least two parallel links, where the message sent over each link includes the identifier (ID) of the second network device; and the first network device establishes the first link group including the at least two parallel links based on the ID of the second network device and a correspondence between the identifier of the first link group and the ID of the second network device.

Specifically, the at least two messages may be Protocol Independent Multicast (PIM) hello messages.

In another possible implementation manner, the first network device selects the second link from the at least two parallel links according to the feature information of the first multicast packet, and sends the first multicast packet through the second link.

It should be understood that, the present application does not specifically limit the feature information of the first multicast packet, and may include one or more of the following: the source address information of the first multicast packet, the destination address information of the first multicast packet, the source address and the destination address information of the first multicast packet, the hash result information corresponding to the source address of the first multicast packet, the hash result information corresponding to the destination address information of the first multicast packet, and the hash result information corresponding to the source address and the destination address information of the first multicast packet.

In another possible implementation manner, the first network device obtains the first link in the first link group by performing a modulo operation on the feature information of the first multicast packet with the number of links in the first link group that are in an available state, and sends the first multicast packet through the first link.

The beneficial effects of the second aspect and any one of the possible implementation manners of the second aspect correspond to the beneficial effects of the first aspect and any one of the possible implementation manners of the first aspect, and therefore, the detailed description is omitted here.

In a third aspect, a method for multicast message load sharing is provided, including: a second network device receives a first multicast packet sent by a first network device through a first link, wherein the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group comprises at least two parallel links between the first network device and the second network device, and the at least two parallel links are different; the second network device determines that the first multicast message forwards an RPF check through a reverse path for one link in the second link group according to the first link; and the second network equipment forwards the first multicast message.

In a possible implementation manner, the second network device determines the second link group corresponding to the first multicast message according to a multicast forwarding table entry; and, based on the first link being one link in the second link group, the second network device determines that the first multicast message passes the RPF check.
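The RPF check described here reduces to a membership test: instead of requiring that the packet arrive on one specific expected interface, the receiving device accepts the packet if the arrival link belongs to the link group recorded for that flow. The following is a minimal sketch with illustrative names, not the device's actual table layout.

```python
# Illustrative sketch: RPF check against a link group rather than a single
# expected interface. The packet passes if its arrival link is any member
# of the group bound to its flow identifier, so traffic shifted to another
# parallel link after a failure still passes without refreshing the entry.

def rpf_check(fwd_entries, member_table, flow_id, arrival_link):
    group_id = fwd_entries.get(flow_id)
    if group_id is None:
        return False                      # unknown flow: fail the RPF check
    return arrival_link in member_table.get(group_id, ())

fwd_entries = {("10.0.0.1", "232.1.1.1"): "LG2"}   # flow id -> link group id
member_table = {"LG2": {"eth1", "eth2"}}            # group id -> member links

ok = rpf_check(fwd_entries, member_table, ("10.0.0.1", "232.1.1.1"), "eth2")
bad = rpf_check(fwd_entries, member_table, ("10.0.0.1", "232.1.1.1"), "eth9")
```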

In another possible implementation manner, the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group, and the second network device obtains the first identifier from the first multicast packet; and the second network equipment determines the second link group according to the first identifier and the multicast forwarding table entry.

In another possible implementation manner, the multicast forwarding entry further includes a correspondence between an identifier of the second link group and identifiers of at least two parallel links in the second link group, and the second network device determines that the first link is one link in the second link group according to the identifier of the second link group and the multicast forwarding entry.

In another possible implementation manner, before the second network device determines, based on the first link being one link in the second link group, that the first multicast packet passes the reverse path forwarding RPF check, the method further includes: the second network device receives at least two messages sent by the first network device, one through each of the at least two parallel links, where the message sent on each link includes an identification (ID) of the first network device; the second network device determines that the ID of the first network device included in each of the at least two messages is the same; and based on the ID of the first network device, the second network device establishes the second link group including the at least two parallel links and a correspondence between the identifier of the second link group and the ID of the first network device.

Specifically, the messages may be Protocol Independent Multicast (PIM) hello messages.
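The link-group establishment described above can be sketched as follows. This is a hypothetical reduction: each received PIM hello is represented as a (local link, neighbor ID) pair, and links whose hellos carry the same neighbor ID form one link group:

```python
from collections import defaultdict

def build_link_groups(hellos):
    """hellos: iterable of (local_link, neighbor_id) pairs, one per
    received PIM hello. Parallel links whose hellos carry the same
    neighbor ID are grouped into one link group."""
    groups = defaultdict(set)
    for link, neighbor_id in hellos:
        groups[neighbor_id].add(link)
    # keep a correspondence between the group identifier (derived here
    # from the neighbor ID) and its member links
    return {f"group-{nid}": links for nid, links in groups.items()}

hellos = [("link1", "R1"), ("link2", "R1"), ("link3", "R1"), ("link4", "R3")]
groups = build_link_groups(hellos)
```

In this sketch the three links toward the same neighbor R1 land in one group, while the link toward R3 forms its own group, mirroring the scenario of fig. 1.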

In a fourth aspect, a first network device is provided, which includes: the device comprises a receiving module, a determining module and a selecting module.

The receiving module is used for receiving the first multicast message;

the determining module is configured to determine, according to a multicast forwarding table entry, a first link group corresponding to the first multicast message, where the first link group includes at least two parallel links between the first network device and a second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device;

the selection module is configured to select a first link to send the first multicast packet, where the first link is one of the at least two parallel links.

In a possible implementation manner, the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group;

the determining module is specifically configured to obtain the first identifier from the first multicast packet through an obtaining module; and determining the first link group according to the multicast forwarding table entry and the first identifier.

In another possible implementation manner, the selection module is specifically configured to determine the at least two parallel links according to an identifier of the first link group; and selecting the first link from the at least two parallel links to send the first multicast message.

In another possible implementation manner, the selecting module is further configured to select a second link other than the first link from the at least two parallel links when the state of the first link is unavailable, and send the first multicast packet through the second link.

In another possible implementation manner, the receiving module is further configured to receive at least two messages sent by the second network device through each of the at least two parallel links, where the message sent by each link includes an identification ID of the second network device; the determining module is further configured to determine that the IDs of the second network devices included in each of the at least two messages are the same;

the first network device further includes:

the establishing module is configured to establish, based on the ID of the second network device, the first link group including the at least two parallel links and a correspondence between the identifier of the first link group and the ID of the second network device.

In another possible implementation manner, the selection module is specifically configured to: and selecting a first link from the at least two parallel links according to the characteristic information of the first multicast message, and sending the first multicast message through the first link.

In a fifth aspect, another first network device is provided and includes a receiving module and an establishing module.

The receiving module is configured to receive a first multicast join message through a first link, where the first multicast join message includes a first identifier of a first multicast packet, and the first link is one link in a first link group;

the establishing module is configured to establish a multicast forwarding entry, where the multicast forwarding entry includes a correspondence between the first identifier and an identifier of the first link group, the first link group includes at least two parallel links between the first network device and a second network device, and the at least two parallel links are different.

In one possible implementation manner, the receiving module is further configured to: receiving a second multicast join message through a second link, wherein the second multicast join message comprises a second identifier of a second multicast message, and the second link is a link in a second link group;

the establishing module is further configured to establish the multicast forwarding table entry, where the multicast forwarding table entry further includes a correspondence between a second identifier and an identifier of a second link group.

In another possible implementation manner, the receiving module is further configured to receive the first multicast packet;

the first network device further includes a determining module, configured to determine, according to the multicast forwarding entry, a first link group corresponding to the first multicast message;

the first network device further includes a selection module, configured to select a second link to send the first multicast packet, where the second link is one of the at least two parallel links.

In another possible implementation manner, the determining module is specifically configured to obtain the first identifier from the first multicast packet through an obtaining module; and determining the first link group according to the multicast forwarding table entry and the first identifier.

In another possible implementation manner, the selection module is specifically configured to determine the at least two parallel links according to the identifier of the first link group, and to select the second link from the at least two parallel links to send the first multicast packet.

In another possible implementation manner, the selecting module is further configured to select a third link, other than the second link, from the at least two parallel links when the state of the second link is unavailable, and send the first multicast packet through the third link.

In another possible implementation manner, the receiving module is further configured to receive at least two messages sent by the second network device through each of the at least two parallel links, where the message sent by each link includes an identification ID of the second network device;

the determining module is further configured to determine that the IDs of the second network devices included in each of the at least two messages are the same;

the establishing module is further configured to establish, based on the ID of the second network device, the first link group including the at least two parallel links and a correspondence between the identifier of the first link group and the ID of the second network device.

In another possible implementation manner, the selection module is specifically configured to select the second link from the at least two parallel links according to the feature information of the first multicast packet, and send the first multicast packet through the second link.

In a sixth aspect, a second network device is provided and includes a receiving module, a determining module, and a sending module.

The receiving module is configured to receive a first multicast packet sent by a first network device through a first link, where the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different; the determining module is configured to determine, based on the first link being one link in the second link group, that the first multicast packet passes a reverse path forwarding RPF check; and the sending module is configured to forward the first multicast packet.

In a possible implementation manner, the determining module is specifically configured to determine the second link group corresponding to the first multicast packet according to a multicast forwarding entry, and to determine that the first link is one link in the second link group, so that the first multicast packet passes the RPF check.

In another possible implementation manner, the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group,

the determining module is specifically configured to obtain a first identifier from the first multicast packet; and determining the second link group according to the first identifier and the multicast forwarding table entry.

In another possible implementation manner, the multicast forwarding entry further includes a correspondence between an identifier of the second link group and identifiers of at least two parallel links in the second link group,

the determining module is specifically configured to determine that the first link is a link in the second link group according to the identifier of the second link group and the multicast forwarding entry.

In another possible implementation manner, the receiving module is further configured to receive at least two messages sent by the first network device through each of the at least two parallel links, where the message sent by each link includes an identification ID of the first network device; the determining module is further configured to determine that the IDs of the first network devices included in each of the at least two messages are the same;

the system further comprises an establishing module, configured to establish the second link group including the at least two parallel links based on the ID of the first network device and a correspondence between the IDs of the second link group and the first network device.

A seventh aspect provides a first network device, where the first network device has a function of implementing the behavior of the first network device in the first aspect or the second aspect or any possible implementation thereof. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing functions.

In one possible design, the structure of the first network device includes a processor and an interface. The processor is configured to support the first network device in performing the corresponding functions in the foregoing method. The interface is configured to support the first network device in receiving or sending the first multicast packet. The first network device may further include a memory, coupled to the processor, that stores the program instructions and data necessary for the first network device.

In another possible design, the first network device includes: a processor, a transmitter, a receiver, a random access memory, a read-only memory, and a bus. The processor is coupled to the transmitter, the receiver, the random access memory, and the read-only memory through the bus. When the first network device needs to run, the basic input/output system built into the read-only memory, or the bootloader of an embedded system, is started to boot the first network device into a normal running state. After the first network device enters the normal running state, the application program and the operating system run in the random access memory, so that the processor performs the method in the first aspect or the second aspect or any possible implementation thereof.

In an eighth aspect, a first network device is provided, including a main control board and an interface board, and further, optionally, a switching network board. The first network device is configured to perform the method in the first aspect or the second aspect or any possible implementation thereof. In particular, the first network device includes modules configured to perform the method in the first aspect or the second aspect or any possible implementation thereof.

In a ninth aspect, a first network device is provided that includes a controller and a first forwarding sub-device. The first forwarding sub-device includes an interface board, and further, optionally, a switching network board. The first forwarding sub-device is configured to perform the function of the interface board in the eighth aspect, and further, may also perform the function of the switching network board in the eighth aspect. The controller includes a receiver, a processor, a transmitter, a random access memory, a read-only memory, and a bus. The processor is coupled to the receiver, the transmitter, the random access memory, and the read-only memory through the bus. When the controller needs to run, the basic input/output system built into the read-only memory, or the bootloader of an embedded system, is started to boot the controller into a normal running state. After the controller enters the normal running state, the application program and the operating system run in the random access memory, so that the processor performs the functions of the main control board in the eighth aspect.

It will be appreciated that in actual practice, the first network device may contain any number of interfaces, processors, or memories.

A tenth aspect provides a second network device, where the second network device has a function of implementing the behavior of the second network device in the third aspect or any possible implementation manner of the third aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing functions.

In one possible design, the structure of the second network device includes a processor and an interface. The processor is configured to support the second network device in performing the corresponding functions in the foregoing method. The interface is configured to support the second network device in receiving the multicast packet sent by the first network device through the first link. The second network device may further include a memory, coupled to the processor, that stores the program instructions and data necessary for the second network device.

In another possible design, the second network device includes: a processor, a transmitter, a receiver, a random access memory, a read-only memory, and a bus. The processor is coupled to the transmitter, the receiver, the random access memory, and the read-only memory through the bus. When the second network device needs to run, the basic input/output system built into the read-only memory, or the bootloader of an embedded system, is started to boot the second network device into a normal running state. After the second network device enters the normal running state, the application program and the operating system run in the random access memory, so that the processor performs the method in the third aspect or any possible implementation thereof.

In an eleventh aspect, a second network device is provided, including a main control board and an interface board, and further, optionally, a switching network board. The second network device is configured to perform the method in the third aspect or any possible implementation thereof. In particular, the second network device includes modules configured to perform the method in the third aspect or any possible implementation thereof.

In a twelfth aspect, a second network device is provided that includes a controller and a first forwarding sub-device. The first forwarding sub-device includes an interface board, and further, optionally, a switching network board. The first forwarding sub-device is configured to perform the function of the interface board in the eleventh aspect, and further, may also perform the function of the switching network board in the eleventh aspect. The controller includes a receiver, a processor, a transmitter, a random access memory, a read-only memory, and a bus. The processor is coupled to the receiver, the transmitter, the random access memory, and the read-only memory through the bus. When the controller needs to run, the basic input/output system built into the read-only memory, or the bootloader of an embedded system, is started to boot the controller into a normal running state. After the controller enters the normal running state, the application program and the operating system run in the random access memory, so that the processor performs the functions of the main control board in the eleventh aspect.

It will be appreciated that in actual practice, the second network device may contain any number of interfaces, processors or memories.

In a thirteenth aspect, a computer program product is provided, including computer program code which, when run on a computer, causes the computer to perform the method in the first aspect or the second aspect or any possible implementation thereof.

In a fourteenth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the third aspect or any possible implementation of the third aspect.

A fifteenth aspect provides a computer-readable medium storing program code which, when run on a computer, causes the computer to perform the method in the first aspect or the second aspect or any possible implementation thereof. The computer-readable medium includes, but is not limited to, one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically EPROM (EEPROM), and hard drive.

A sixteenth aspect provides a computer-readable medium storing program code which, when run on a computer, causes the computer to perform the method in the third aspect or any possible implementation thereof. The computer-readable medium includes, but is not limited to, one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically EPROM (EEPROM), and hard drive.

A seventeenth aspect provides a chip, where the chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to perform the method in the first aspect or the second aspect or any possible implementation thereof. In a specific implementation process, the chip may be implemented in the form of a Central Processing Unit (CPU), a Micro Controller Unit (MCU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), a system on chip (SoC), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Programmable Logic Device (PLD).

In an eighteenth aspect, a chip is provided, where the chip includes a processor and a data interface, where the processor reads instructions stored in a memory through the data interface to perform the method of the third aspect or any one of the possible implementation manners of the third aspect. In a specific implementation process, the chip may be implemented in the form of a Central Processing Unit (CPU), a Micro Controller Unit (MCU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), a system on chip (SoC), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Programmable Logic Device (PLD).

In a nineteenth aspect, a system is provided that includes the first network device and the second network device described above.

Drawings

Fig. 1 is a schematic diagram of one possible application scenario applicable to the present application.

Fig. 2 is a schematic flowchart of a method for multicast message load sharing according to an embodiment of the present application.

Fig. 3 is a schematic flowchart of another method for sharing multicast packet load according to an embodiment of the present application.

Fig. 4 is a schematic structural diagram of a first network device 400 according to an embodiment of the present application.

Fig. 5 is a schematic structural diagram of another first network device 500 provided in the embodiment of the present application.

Fig. 6 is a schematic hardware configuration diagram of the first network device 2000 according to an embodiment of the present application.

Fig. 7 is a schematic hardware configuration diagram of another first network device 2100 according to an embodiment of the present application.

Fig. 8 is a schematic structural diagram of another second network device 800 according to an embodiment of the present application.

Fig. 9 is a schematic hardware structure diagram of a second network device 2200 according to an embodiment of the present application.

Fig. 10 is a schematic hardware configuration diagram of another second network device 2400 according to an embodiment of the present application.

Detailed Description

The technical solution in the present application will be described below with reference to the accompanying drawings.

Multicast (multicast) is a data transmission method in which data is sent simultaneously and efficiently to a plurality of receivers on a Transmission Control Protocol (TCP)/Internet Protocol (IP) network by using one multicast address. A multicast source sends a multicast stream through links in the network to the multicast group members in a multicast group, and all the multicast group members in the multicast group can receive the multicast stream. The multicast transmission mode implements a point-to-multipoint data connection between the multicast source and the multicast group members. Because a multicast stream needs to be delivered only once on each network link, and is replicated only where a branch occurs on the link, the multicast transmission mode improves data transmission efficiency and reduces the possibility of congestion on the backbone network.

Internet Protocol (IP) multicast technology implements efficient point-to-multipoint data transmission in an IP network, and can effectively save network bandwidth and reduce network load. It is therefore widely applied in aspects such as real-time data transmission, multimedia conferencing, data replication, Internet Protocol Television (IPTV), gaming, and simulation.

Referring to fig. 1, an application scenario applicable to the embodiments of the present application is described in detail below.

Fig. 1 is a schematic diagram of a possible multicast scenario applicable to the embodiment of the present application. Referring to fig. 1, a multicast Receiver (RCV), a router R1, a router R2, a router R3, and a multicast Source (SRC) may be included in the scenario.

It should be understood that there may be a plurality of routers (R) connected to the multicast receiver in the embodiment of the present application, and for convenience of description, three routers, for example, router R1/router R2/router R3, are illustrated in fig. 1 as an example.

The multicast receiver may send a multicast join message to router R1 connected to it, router R1 in turn sends a multicast join message to router R2, and router R2 in turn sends a multicast join message to router R3. After receiving multicast data traffic from the multicast source, the router R3 sends the multicast data traffic to the multicast receivers along the router R2 and the router R1.

It should be understood that, in the embodiment of the present application, the type of the multicast join message sent by the multicast receiver is not specifically limited. The multicast join message may be an Internet Group Management Protocol (IGMP) message, a Multicast Listener Discovery (MLD) message, or a Protocol Independent Multicast (PIM) message.

As shown in fig. 1, the IGMP protocol or the MLD protocol may run between router R1 and the multicast receiver. The PIM protocol may run among router R1, router R2, and router R3, and may also run on the interface of router R3 facing the multicast source.

It should be noted that the PIM protocol does not need to be run on the interface of the multicast source connected to the router R3.

In the embodiment of the present application, the PIM protocol may run among router R1, router R2, and router R3, and an interface on which the PIM protocol is enabled on router R1, router R2, or router R3 may send PIM hello packets externally. For example, if the PIM protocol is enabled on a link interface of router R1, that interface may send a PIM hello packet to router R2 or router R3. As another example, if the PIM protocol is enabled on a link interface of router R2, that interface may send a PIM hello packet to router R1 or router R3.

It should be noted that, in this embodiment of the present application, when a link interface of a device is in the available state (up), it may be understood that the link of the device is normal and packets can be forwarded; when a link interface of the device is in the unavailable state (down), it may be understood that the link of the device has failed and packets cannot be forwarded.

Referring to fig. 1, there are situations in the network where there are multiple parallel links between nodes, e.g., there are at least two parallel links between router R1 and router R2. For convenience of description, three parallel links (link 1, link 2, and link 3) between the router R1 and the router R2 are illustrated in fig. 1.

It should be understood that in the embodiment of the present application, the parallel link may be two or more links between the connected nodes, and the two or more links are different. And, the two or more links are not bundled into a Link Aggregation Group (LAG).

In a possible implementation manner, corresponding physical interfaces between at least two parallel links in the first link group are different, and Internet Protocol (IP) addresses on the at least two parallel link interfaces are different. In another possible implementation manner, the link types corresponding to at least two parallel links in the first link group are different. In another possible implementation manner, physical interfaces corresponding to at least two parallel links in the first link group are the same, but Virtual Local Area Networks (VLANs) configured on the physical interfaces are different, that is, at least two parallel links between nodes may be regarded as at least two different logical links.

It should also be understood that the foregoing differences may mean that all of the items are different, or that only some of them are different. For example, the IP addresses on the at least two parallel link interfaces being different may mean that the IP addresses on all of the parallel link interfaces differ from one another, or that only the IP addresses on some of the parallel link interfaces differ.

In a related technical solution, different multicast source groups (S, G) are sent over different links, thereby achieving load sharing of multicast traffic over multiple parallel links. Specifically, taking the scenario shown in fig. 1 as an example, there are three parallel links (link 1, link 2, and link 3) between router R1 and router R2. Assuming that router R1 receives joins for 30 multicast groups (S1, G1 to G30), when router R1 sends PIM multicast join messages to router R2, according to the SG multicast forwarding table, it sends the multicast join messages corresponding to G1 to G10 to router R2 through link 1, those corresponding to G11 to G20 through link 2, and those corresponding to G21 to G30 through link 3. Correspondingly, after receiving the multicast traffic, router R2, according to the SG multicast forwarding entries, sends the multicast traffic of G1 to G10 to router R1 along link 1, that of G11 to G20 along link 2, and that of G21 to G30 along link 3, thereby achieving convergence of the multicast service.
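The static group-to-link assignment of this related art can be sketched as follows. This is a hypothetical illustration; the group ranges follow the example above:

```python
def related_art_link(group_number):
    """Related art: each multicast group is statically pinned to one
    parallel link in the SG forwarding table.
    G1-G10 -> link 1, G11-G20 -> link 2, G21-G30 -> link 3."""
    if 1 <= group_number <= 10:
        return "link1"
    if 11 <= group_number <= 20:
        return "link2"
    if 21 <= group_number <= 30:
        return "link3"
    raise ValueError("group outside G1-G30")

# One SG forwarding entry per group, each naming a single outgoing link.
sg_table = {f"G{n}": related_art_link(n) for n in range(1, 31)}
```

Note that every entry names exactly one link, which is why a link failure forces all affected entries to be rewritten, as discussed next.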

It should be understood that convergence of the multicast service may be understood as the multicast traffic corresponding to a multicast source group being received from the expected link.

In the foregoing related technical solution, if some of the multiple parallel links fail, router R1 needs to update the SG multicast forwarding entries and send multicast join messages to a new link. For example, when link 1 fails, router R1 needs to resend the multicast join messages corresponding to G1 to G10 to router R2 along link 2; that is, router R1 must update the SG multicast forwarding entries and resend the multicast join messages corresponding to G1 to G10 on link 2, so that the multicast join messages of both G1 to G10 and G11 to G20 are sent to router R2 along link 2, and the multicast traffic of G1 to G10 can then be sent from router R2 to router R1 along link 2. In this related technical solution, whenever some of the parallel links fail or recover, the SG multicast forwarding entries need to be updated and multicast join messages need to be sent to the new link; when the number of multicast source groups is large, the convergence time of the multicast service is therefore long.

The method for sharing multicast packet load provided by the embodiments of the present application can shorten the convergence time of the multicast service when some of the multiple parallel links fail or recover.
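The contrast with the related art can be sketched as follows. The data layout is hypothetical, and `pick` stands in for any deterministic selection over the currently available links: on failover, only the selection step re-runs; the forwarding entry, which points at the link group rather than at a single link, is untouched.

```python
# Hypothetical forwarding state: the entry names a link group, not a link.
forwarding_entry = {("S1", "G1"): "group-R1"}   # never rewritten on failover
link_group = ["link1", "link2", "link3"]

def pick(available, key):
    """Deterministic selection over the available links (stand-in)."""
    return available[hash(key) % len(available)]

key = ("S1", "G1")
link_state = {"link1": True, "link2": True, "link3": True}
before = pick([l for l in link_group if link_state[l]], key)

link_state["link1"] = False                     # link 1 fails
after = pick([l for l in link_group if link_state[l]], key)
```

Because no forwarding entry changes and no multicast join message is resent, convergence after a link failure does not grow with the number of multicast source groups.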

Fig. 2 is a schematic flowchart of a method for multicast message load sharing according to an embodiment of the present application. As shown in fig. 2, the method may include steps 210 to 230, which are described in detail below.

Step 210: the first network equipment receives the first multicast message.

For example, the first network device in the embodiment of the present application may correspond to the router R2 in fig. 1. The router R2 may receive a first multicast packet sent by a multicast source.

Step 220: and the first network equipment determines a first link group corresponding to the first multicast message according to the multicast forwarding table entry.

In this embodiment, the first link group includes at least two parallel links between the first network device and a neighbor of the first network device, and the at least two parallel links are different. For a detailed description of the parallel link, please refer to the above description, which is not repeated herein.

It should be understood that, taking the scenario shown in fig. 1 as an example, the neighbors of the first network device may correspond to router R1 in fig. 1. At least two parallel links between the router R1 and the router R2 may be included in the first link group, and fig. 1 illustrates an example in which three parallel links are included between the router R1 and the router R2.

Optionally, in some embodiments, the multicast forwarding entry may include a correspondence between a first identifier of the first multicast packet and an identifier of the first link group. The first network device may obtain the first identifier from the first multicast packet, and determine the first link group according to the first identifier and the multicast forwarding entry.

The identifier of the first link group may take various forms, which is not specifically limited in this embodiment of the application. In one possible implementation, the identifier of the first link group may be a character-string name, where the name indicates that the outgoing interface or the incoming interface of the message is a link group. In another possible implementation, the identifier of the first link group may be a link group type plus an integer value, where the integer value is the identifier itself and the link group type indicates that the identifier identifies a link group; that is, the combination of link group type and integer value may indicate that the outgoing interface or the incoming interface of a packet is a link group.

Optionally, the first network device may establish the first link group before step 220. Specifically, the first network device may receive at least two messages sent by a neighbor of the first network device through the at least two parallel links, respectively, where each of the at least two messages includes the ID of the neighbor; the first network device determines that the neighbor IDs included in the at least two messages are the same, and establishes the first link group including the at least two parallel links. As an example, the message may be a PIM hello message.

Optionally, before step 220, the first network device may further establish the multicast forwarding entry. Specifically, the first network device may receive a message sent by a neighbor of the first network device, where the message includes a first identifier of the first multicast packet. The message may be, for example, a PIM multicast join message. The first network device may determine that a link receiving a message sent by a neighbor of the first network device is one link in the first link group, and establish the multicast forwarding entry.

Step 230: the first network equipment selects a first link from at least two parallel links in the first link group to send a first multicast message.

The first link is one of at least two parallel links in the first link group.

As an example, in this embodiment, the first network device may determine at least two parallel links according to the identifier of the first link group, and select the first link from the at least two parallel links to send the first multicast packet.

In the above technical solution, in a scenario where multiple different parallel links are used to perform load sharing on multicast packets sent between network devices, if at least one link included in a link group performing load sharing fails or recovers, only the link group needs to be refreshed; the multicast forwarding table entries that, in the prior art, record the correspondence between links and multicast traffic do not need to be refreshed, so the convergence time of the multicast service can be shortened.

Optionally, in some embodiments, when the state of the first link is unavailable, the first network device selects a second link, which is other than the first link, from the at least two parallel links, and sends the first multicast packet through the second link.

Optionally, in some embodiments, the first network device selects a first link from the at least two parallel links according to the feature information of the first multicast packet, and sends the first multicast packet through the first link.

It should be understood that, the present application does not specifically limit the feature information of the first multicast packet, and may include one or more of the following: the source address information of the first multicast message, the destination address information of the first multicast message, the source address and the destination address information of the first multicast message, the hash result information corresponding to the source address of the first multicast message, the hash result information corresponding to the destination address information of the first multicast message, and the hash result information corresponding to the source address and the destination address information of the first multicast message.

Optionally, in some embodiments, the first network device may obtain the first link in the first link group by performing modulo operation on the feature information of the first multicast packet according to the number of links in the first link group whose state is available, and send the first multicast packet through the first link.
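A minimal sketch of this hash-and-modulo selection is given below. It assumes the feature information is the (S, G) address pair; the helper name `select_link` and the use of SHA-256 are illustrative choices, not part of the embodiment.

```python
import hashlib

def select_link(source_addr: str, group_addr: str, available_links: list) -> str:
    """Select one outgoing link by hashing (S, G) and taking the result
    modulo the number of links whose state is available."""
    if not available_links:
        raise RuntimeError("no available link in the link group")
    # Hash the multicast source address and group address together.
    digest = hashlib.sha256(f"{source_addr},{group_addr}".encode()).digest()
    # Modulo (remainder) over the count of available links only, so a
    # failed link simply drops out of the selection without touching
    # any SG multicast forwarding entry.
    index = int.from_bytes(digest[:4], "big") % len(available_links)
    return available_links[index]

# Example: link group with three available parallel links.
links = ["gi2/0/1", "gi2/0/2", "gi2/0/3"]
chosen = select_link("10.0.0.1", "232.1.1.1", links)
assert chosen in links
```

The same (S, G) pair always maps to the same link while the set of available links is unchanged, which keeps the packets of one multicast flow in order.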

A possible implementation process of the method for multicast packet load sharing provided in this embodiment is described in detail below with reference to the specific example in fig. 3 by taking the scenario shown in fig. 1 as an example. It should be understood that the example of fig. 3 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios of fig. 3. It will be apparent to those skilled in the art from the examples given that various equivalent modifications or variations can be made, and such modifications and variations also fall within the scope of the embodiments of the application.

Fig. 3 is a schematic flowchart of another method for sharing multicast packet load according to an embodiment of the present application. As shown in FIG. 3, the method may include steps 310-350, which are described in detail below in relation to steps 310-350, respectively.

Step 310: the router R1 and the router R2 establish link group table entries.

In the embodiment of the present application, the 3 parallel links between the router R1 and the router R2 may be configured or automatically determined or identified as a "parallel link group".

For the configuration on the router R1, "Pim-Hello router-id 1.1.1.1" indicates that the identifier (router-id) carried by the router R1 when it sends PIM Hello messages is 1.1.1.1. "Pim-Hello Router-id enable" indicates that the PIM interface of the router R1 is enabled to carry the router-id value of the router R1 when sending PIM Hello messages. "Pim load-balance link-group auto-generation" indicates that a link group is automatically generated for parallel links; in the example of the router R1, a parallel link group and its identifier ID1 are automatically generated for the 3 parallel links between the router R1 and the router R2. "Interface gi 1/0/1" indicates the interface on the router R1 connected to a multicast receiver, with IP address 192.168.1.1. "Interface gi 2/0/1" represents the ingress interface of link 1 on the router R1 connected to R2, with IP address 192.168.11.101. "Interface gi 2/0/2" represents the ingress interface of link 2 on the router R1 connected to R2, with IP address 192.168.12.102. "Interface gi 2/0/3" represents the ingress interface of link 3 on the router R1 connected to R2, with IP address 192.168.13.103.

It should be understood that the router-id is an identifier (ID) of a router node, i.e., a network device.

It should be understood that, for the router R1, the interfaces on the router R1 that send multicast join messages to the router R2 are members of the parallel link group, so the multicast incoming interface established on the router R1 is the incoming interface of a parallel link group.

It should also be understood that in the embodiments of the present application, "member" may refer to an interface where a network device in a load sharing group is connected to a load sharing link. The interfaces include ethernet ports, for example, in fig. 1, 3 ports of each of the router R1 and the router R2 are members of a load sharing group. Further, router R1 may include another interface, in which case router R1 includes 4 load sharing group members.

For the configuration on the router R2, "Pim-Hello router-id 2.2.2.2" indicates that the router-id carried by the router R2 when it sends PIM Hello messages is 2.2.2.2. "Pim-Hello Router-id enable" indicates that the PIM interface of the router R2 is enabled to carry the router-id value of the router R2 when sending PIM Hello messages. "Pim load-balance link-group auto-generation" indicates that a link group is automatically generated for parallel links; in the example of the router R2, a parallel link group and its identifier ID2 are automatically generated for the 3 parallel links between the router R2 and the router R1. "Interface gi 2/0/1" represents the outgoing interface of link 1 on the router R2 connected to R1, with IP address 192.168.11.201. "Interface gi 2/0/2" represents the outgoing interface of link 2 on the router R2 connected to R1, with IP address 192.168.12.202. "Interface gi 2/0/3" represents the outgoing interface of link 3 on the router R2 connected to R1, with IP address 192.168.13.203. "Interface gi 3/0/1" indicates the interface of the router R2 connected to the router R3, with IP address 192.168.23.201.

For the configuration on the router R3, "Interface gi 3/0/1" represents the interface on the router R3 connected to the router R2, with IP address 192.168.23.231.

The specific implementation process by which the router R1 establishes the link group table entry is described in detail below with an example.

The router R2 may send a PIM Hello message carrying the router-id value 2.2.2.2 of the router R2 to the router R1 according to the above configuration. The router R1 establishes the following correspondence relationship according to the PIM Hello message sent by the router R2.

(Interface 2/0/1, neighbor (Nbr) = 192.168.11.201, router-id = 2.2.2.2)

(Interface 2/0/2,Nbr=192.168.12.202,router-id=2.2.2.2)

(Interface 2/0/3,Nbr=192.168.13.203,router-id=2.2.2.2)

For hello messages from the same node (different nodes may be distinguished by the router-id) received over different interfaces, the router R1 may generate one link group per router-id. The link table entries in the link group established at the router R1 are shown below.

Link table entry 1 in the link group: (Link group ID1, Interface gi2/0/1)

(Link group ID1, Interface gi2/0/2)

(Link group ID1, Interface gi2/0/3)

The link group ID1 has a one-to-one correspondence with the router-id. For example, ID1 may be derived directly from the router-id value, or it may be a value different from the router-id, with the correspondence established as follows. The correspondence may also be referred to as a link group entry.

Link group table entry 1: (Link group ID1, router-id = 2.2.2.2)
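The construction above can be illustrated with a short sketch. It models each received PIM Hello as an (interface, neighbor address, router-id) tuple; the function name and data shapes are hypothetical simplifications.

```python
from collections import defaultdict

def build_link_groups(hello_entries):
    """Group local interfaces by the neighbor router-id carried in the
    PIM Hello messages received on them; interfaces whose hellos carry
    the same router-id are parallel links to the same neighbor."""
    groups = defaultdict(list)
    for interface, nbr_addr, router_id in hello_entries:
        groups[router_id].append(interface)
    # Each key-value pair corresponds to one link group table entry,
    # e.g. (link group ID1, router-id = 2.2.2.2) plus its member links.
    return dict(groups)

# Hellos received by router R1 on its three parallel links to R2.
hellos = [
    ("gi2/0/1", "192.168.11.201", "2.2.2.2"),
    ("gi2/0/2", "192.168.12.202", "2.2.2.2"),
    ("gi2/0/3", "192.168.13.203", "2.2.2.2"),
]
groups = build_link_groups(hellos)
assert groups["2.2.2.2"] == ["gi2/0/1", "gi2/0/2", "gi2/0/3"]
```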

As another example, the specific implementation process by which the router R2 establishes the link group table entry is described in detail below.

The router R1 may send a PIM Hello message carrying the router-id value 1.1.1.1 of the router R1 to the router R2 according to the above configuration. The router R2 establishes the following correspondence relationship according to the PIM Hello message sent by the router R1.

(Interface 2/0/1,Nbr=192.168.11.101,router-id=1.1.1.1)

(Interface 2/0/2,Nbr=192.168.12.102,router-id=1.1.1.1)

(Interface 2/0/3,Nbr=192.168.13.103,router-id=1.1.1.1)

For hello messages from the same node (different nodes may be distinguished by the router-id) received over different interfaces, the router R2 may generate one link group per router-id. The link table entries in the link group established at the router R2 are shown below.

Link table entry 2 in the link group: (Link group ID2, Interface gi2/0/1)

(Link group ID2, Interface gi2/0/2)

(Link group ID2, Interface gi2/0/3)

The link group ID2 has a one-to-one correspondence with the router-id. For example, ID2 may be derived directly from the router-id value, or it may be a value different from the router-id, with the correspondence established as follows. The correspondence may also be referred to as a link group entry.

Link group table entry 2: (Link group ID2, router-id = 1.1.1.1)

As another example, for the router R3, the router R3 may receive a PIM Hello message sent by the router R2 through interface gi3/0/1, where the PIM Hello message carries the router-id value 2.2.2.2 of the router R2. Since no router-id is configured on the router R3, and the PIM interface of the router R3 is not enabled to carry the router-id value of the router R3 when sending PIM Hello messages, the router R3 may ignore the router-id value 2.2.2.2 of the router R2 and process the message as a normal PIM Hello message.

The router-id value carried by the router R1 and by the router R2 when sending PIM Hello messages may be a 32-bit value. For example, in an internet protocol version 4 (IPv4) network, the router-id value may be a 32-bit value identical to the IP address of the loopback port. As another example, in an internet protocol version 6 (IPv6) network, the router-id value may still be a 32-bit value, which has no relation to the IPv6 address of the loopback port.

Step 315: the router R1 receives the multicast join message sent by the multicast receiver, and establishes SG multicast forwarding table entry 1.

The multicast receiver sends a multicast source group (S, G) join message to the router R1, and after receiving the multicast source group (S, G) join message, the router R1 may query the next hop and the egress interface of the unicast route according to S. When the next hop interface is any one of gi2/0/1, gi2/0/2, or gi2/0/3, the router R1 may send a multicast source group (S, G) join message to the router R2 through the next hop interface.

Router R1 also establishes multicast forwarding entry 1 as follows:

((S, G), input interface flag (IIFFlag) = <link group>, input interface (IIF) = link group ID1, output interface flag (OIFFlag) = <null>, output interface list (OIFList) = gi1/0/1)

It should be understood that, in the embodiment of the present application, the multicast forwarding table entry 1 is used for the router R1 to forward the multicast traffic received from the router R2. In the multicast forwarding table entry 1, IIFFlag = <link group> indicates that the incoming interface of the multicast traffic established on the router R1 is a parallel link group, and IIF indicates that the identifier of the parallel link group is ID1. OIFFlag = <null> indicates that the outgoing interface through which the router R1 forwards the received multicast traffic is not a link group, and OIFList indicates that the outgoing interface through which the router R1 forwards the received multicast traffic is the interface gi1/0/1.

In the embodiment of the present application, the forwarding table entry established on the router R1 is as follows.

SG multicast forwarding table entry 1:

((S, G), IIFFlag = <link group>, IIF = link group ID1, OIFFlag = <null>, OIFList = gi1/0/1)

Link table entry 1 in the link group:

(Link group ID1, Interface gi2/0/1)

(Link group ID1, Interface gi2/0/2)

(Link group ID1, Interface gi2/0/3)

Link group table entry 1:

(Link group ID1, router-id = 2.2.2.2)

As an example, the identifier of the first link group may be the IIFFlag and IIF in the SG multicast forwarding table entry 1, where IIFFlag indicates that the incoming interface of the multicast traffic established on the router R1 is a parallel link group, and IIF indicates that the identifier of the parallel link group is ID1, which identifies the first link group including the three parallel links.

Alternatively, in some embodiments, when there are multiple multicast entries, for example, 30 multicast entries, the forwarding entries established on the router R1 are built in the same manner, with each SG multicast forwarding entry sharing the same incoming link group ID1.

Step 320: the router R2 receives the multicast join message sent by the router R1, and establishes an SG multicast forwarding table entry 2.

The router R1 sends a multicast source group (S, G) join message to the router R2, and after the router R2 receives the multicast source group (S, G) join message, if the router R2 determines that the received multicast source group (S, G) join message is from any one of gi2/0/1, gi2/0/2, or gi2/0/3, the router R2 may establish the following SG multicast forwarding table entry 2:

((S, G), IIFFlag = <null>, IIF = Interface gi3/0/1, OIFFlag = <link group>, OIFList = link group ID2)

It should be understood that, in the embodiment of the present application, the SG multicast forwarding entry 2 is used for the router R2 to forward the multicast traffic received from the router R3. In the SG multicast forwarding entry 2, IIFFlag = <null> indicates that the incoming interface of the multicast traffic established on the router R2 is not a link group, and IIF indicates that the incoming interface through which the router R2 receives the multicast traffic is interface gi3/0/1. OIFFlag = <link group> indicates that the outgoing interface of the multicast traffic established on the router R2 is a parallel link group, and OIFList indicates that the identifier of the parallel link group is ID2.

In the embodiment of the present application, the forwarding table entry established on the router R2 is as follows.

Multicast forwarding entry 2:

((S, G), IIFFlag = <null>, IIF = Interface gi3/0/1, OIFFlag = <link group>, OIFList = link group ID2)

Link table entry 2 in the link group:

(Link group ID2, Interface gi2/0/1)

(Link group ID2, Interface gi2/0/2)

(Link group ID2, Interface gi2/0/3)

Link group table entry 2:

(Link group ID2, router-id = 1.1.1.1)

Alternatively, in some embodiments, when there are multiple multicast entries, for example, 30 multicast entries, the forwarding entries established on the router R2 are built in the same manner, with each SG multicast forwarding entry sharing the same outgoing link group ID2.

Step 325: the router R3 receives the multicast join message sent by the router R2 and sends the multicast join message to the multicast source.

Step 330: after receiving the multicast join message, the multicast source sends the multicast traffic to the router R3.

Step 335: the router R3 sends the multicast traffic to the router R2 through the link Interface gi3/0/1 connected to the router R2.

Step 340: after receiving the multicast traffic sent by the router R3, the router R2 sends the multicast traffic to the router R1 according to the established forwarding table.

After receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the established forwarding table entry.

Specifically, as an example, after receiving the multicast traffic from the interface gi3/0/1, the router R2 may determine, according to the OIFFlag in the SG multicast forwarding table entry 2, that the outgoing interface of the multicast traffic is a parallel link group, look up link table entry 2 in the link group according to the OIFList, and select one interface from the multiple outgoing interfaces in link table entry 2 to send the multicast traffic to the router R1.

It should be understood that there are various implementations of selecting one of the multiple outgoing interfaces in the link table entry 2 in the link group to send the multicast traffic to the router R1 in the embodiment of the present application. In a possible implementation manner, the router R2 may perform hash calculation on the multicast source S and the multicast group G to obtain a hash calculation result, and then perform modulo (also referred to as remainder) according to the number of link interfaces whose states are available in the link group ID 2. The available link interfaces in the link table entry 2 in the link group are 3, which are respectively an interface gi2/0/1, an interface gi2/0/2 and an interface gi 2/0/3. When the result of the modulo is 0/1/2, the router R2 may forward the multicast traffic to the router R1 through the interfaces gi2/0/1, gi2/0/2, and gi2/0/3, respectively.

There are various specific implementations of the router R2 determining the link interface status in the link group ID2 in the embodiment of the present application. For convenience of description, the following takes the determination of the corresponding state of the interface gi2/0/1 of link 1 in the link group ID2 as an example.

Specifically, as an example, the interface on the router R2 connecting link 1 to the router R1 enables the PIM protocol, and sends packets, for example, PIM Hello packets, to the interface on the router R1 connecting link 1 to the router R2. Similarly, the interface on the router R1 connecting link 1 to the router R2 enables the PIM protocol and sends packets, for example, PIM Hello packets, to the corresponding interface on the router R2. If the router R1 can receive the packets sent by the router R2, the router R1 may consider the state of its interface of link 1 connected to the router R2 to be available. If the router R1 does not receive the packets sent by the router R2, the router R1 may consider the state of its interface of link 1 connected to the router R2 to be unavailable.

As another example, bidirectional forwarding detection (BFD) may further be deployed between the router R1 and the router R2. The router R1 may determine, according to the BFD detection result, the state of the interface on the router R1 connecting link 1 to the router R2; similarly, the router R2 may determine, according to the BFD detection result, the state of the interface on the router R2 connecting link 1 to the router R1. For example, BFD detection packets are sent at regular intervals between the router R1 and the router R2. If the interface of link 1 on the router R1 receives, within the time interval, the BFD detection packet sent by the interface of link 1 on the router R2, the router R1 may consider the interface of link 1 on the router R2 connected to the router R1 to be available. If the interface of link 1 on the router R1 does not receive the BFD detection packet within the time interval, the router R1 may consider that interface to be unavailable.

Step 345: after receiving the multicast traffic sent by the router R2, the router R1 forwards the multicast traffic according to the established forwarding table.

Take the example that the router R1 receives the multicast traffic sent by the router R2 through the interface gi 2/0/1. The router R1 may determine whether the multicast traffic can be checked by Reverse Path Forwarding (RPF) according to whether an interface actually receiving the multicast traffic is consistent with an interface represented by the IIF field in the SG multicast forwarding entry.

The multicast routing protocol determines the upstream neighbor equipment and the downstream neighbor equipment through the existing unicast routing information, and creates a multicast routing table entry. By using the RPF checking mechanism, the multicast data stream can be ensured to be correctly transmitted along the multicast distribution tree (path), and the generation of a loop on a forwarding path can be avoided.

In the embodiment of the present application, on one hand, the router R1 receives the multicast traffic sent by the router R2 through the interface gi2/0/1, determines that an incoming interface of the multicast traffic is the parallel link group ID1 according to the IIFFlag in the SG multicast forwarding table entry 1, and determines that the interface gi2/0/1 belongs to one link interface in the link group ID1 according to the link table entry 1 in the link group. On the other hand, the router R1 determines, from the IIF in the SG multicast forwarding entry 1, that it is the link group ID1 that is expected to receive the multicast traffic sent by the router R2. Therefore, the multicast traffic received by the router R1 through the interface gi2/0/1 and sent by the router R2 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver, for example, the router R1 can forward the multicast traffic to the multicast receiver through the interface gi1/0/1 connected to the multicast receiver.

For the router R1, whether the multicast traffic sent to the router R2 is received from the link 1, the link 2 or the link 3, the multicast traffic can pass the RPF check and be sent to the downstream outgoing interface of the router R1.
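As the passage above shows, the RPF check against a link group reduces to a membership test. The sketch below uses hypothetical field names modeled on IIFFlag/IIF; it is an illustration of the check, not the embodiment's implementation.

```python
def rpf_check(receiving_interface, entry, link_group_members):
    """RPF check for an SG entry whose incoming interface is a link group:
    traffic passes if it arrives on any member link of the expected group."""
    if entry["iif_flag"] == "link-group":
        members = link_group_members.get(entry["iif"], ())
        return receiving_interface in members
    # Ordinary entry: the actual incoming interface must match exactly.
    return receiving_interface == entry["iif"]

# SG multicast forwarding entry 1 on router R1 (incoming side is group ID1).
entry = {"iif_flag": "link-group", "iif": "ID1", "oif_list": ["gi1/0/1"]}
members = {"ID1": ["gi2/0/1", "gi2/0/2", "gi2/0/3"]}
assert rpf_check("gi2/0/2", entry, members)      # any member link passes
assert not rpf_check("gi3/0/1", entry, members)  # non-member fails
```

Because any member of the group passes, traffic arriving on link 1, link 2, or link 3 is accepted without the SG entry naming a specific link.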

Step 350: when a link fails or recovers in a parallel link group comprising at least two links between the router R1 and the router R2, the router R1 and the router R2 realize the convergence of the multicast service by refreshing the link table entries in the link group.

As an example, suppose link 1 in the parallel link group fails. When link 1 fails, the router R1 checks whether other links in the parallel link group to which link 1 belongs are available. If other links in that parallel link group are available, the router R1 no longer needs to send a multicast join message or a multicast exit message to the router R2.

The router R1 and the router R2 may refresh the link table entries in the link group; after the refresh, the link table entries in the link group on the router R1 and the router R2 no longer include the interface of the failed link 1.

For the router R2, after receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the forwarding table entry. The router R2 may perform hash calculation on the multicast source S and the multicast group G to obtain a hash calculation result, and then perform modulo (also referred to as remainder) according to the number of link interfaces whose states are available in the link group ID 2. The available link interfaces in the link table entry 2 in the link group are 2, namely, interfaces gi2/0/2 and interfaces gi 2/0/3. When the result of the modulo is 0/1, the router R2 may forward the multicast traffic to the router R1 through the interfaces gi2/0/2 and gi2/0/3, respectively.

For the router R1, whether receiving the multicast traffic sent by the router R2 from the interface gi2/0/2 or from the interface gi2/0/3, since the interface gi2/0/2 and the interface gi2/0/3 belong to the link group ID1, the multicast traffic sent by the router R2 received by the router R1 through the interface gi2/0/2 or the interface gi2/0/3 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver.

As another example, suppose link 1 and link 2 in the parallel link group fail. When link 1 and link 2 fail, the router R1 checks whether other links in the parallel link group to which link 1 and link 2 belong are available. If other links in that parallel link group are available, the router R1 no longer needs to send a multicast join message or a multicast exit message to the router R2.

The router R1 and the router R2 may refresh the link table entries in the link group; after the refresh, the link table entries in the link group on the router R1 and the router R2 retain only the interface of link 3.

For the router R2, after receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the forwarding table entry. The router R2 may perform hash calculation on the multicast source S and the multicast group G to obtain a hash calculation result, and then perform modulo (also referred to as remainder) according to the number of link interfaces whose states are available in the link group ID 2. The available link interfaces in the link table entry 2 in the link group are 1, and are interfaces gi 2/0/3. When the result of the modulo is 0, the router R2 may forward the multicast traffic to the router R1 through the interface gi 2/0/3.

For the router R1, since the interface gi2/0/3 belongs to the link group ID1, the multicast traffic sent by the router R2 received by the router R1 through the interface gi2/0/3 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver.

Therefore, in the case of a link failure, neither the router R1 nor the router R2 needs to refresh the SG multicast forwarding entries, but only needs to refresh the link entries in the link group. The convergence speed is irrelevant to the number of SG multicast forwarding table entries, so that the convergence time of the multicast service can be shortened.
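The convergence step just described can be sketched as a refresh of the link group's member list alone, with the SG entries continuing to reference the unchanged group identifier. All names below are hypothetical simplifications.

```python
def refresh_link_group(link_groups, group_id, failed=(), recovered=()):
    """On link failure or recovery, refresh only the link group entry;
    SG multicast forwarding entries still point at group_id, so the work
    done is independent of the number of multicast source groups."""
    members = set(link_groups[group_id])
    members -= set(failed)     # drop failed links
    members |= set(recovered)  # re-add recovered links
    link_groups[group_id] = sorted(members)
    return link_groups[group_id]

link_groups = {"ID2": ["gi2/0/1", "gi2/0/2", "gi2/0/3"]}
# Link 1 fails: only the group entry changes.
assert refresh_link_group(link_groups, "ID2", failed=["gi2/0/1"]) == [
    "gi2/0/2", "gi2/0/3"]
# Link 1 recovers: the group entry is refreshed back.
assert refresh_link_group(link_groups, "ID2", recovered=["gi2/0/1"]) == [
    "gi2/0/1", "gi2/0/2", "gi2/0/3"]
```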

As another example, suppose link 1 and link 2 in the parallel link group recover from the failure state to the normal state. When link 1 and link 2 recover, the router R1 checks whether link 1 and link 2 belong to an existing parallel link group. Since other links in the parallel link group to which link 1 and link 2 belong are already available, the router R1 does not need to send a multicast join message or a multicast exit message to the router R2.

The router R1 and the router R2 may refresh the link table entries in the link group; after the refresh, the link table entries in the link group on the router R1 and the router R2 again include the interfaces of all three links.

For the router R2, after receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the forwarding table entry. The router R2 may perform hash calculation on the multicast source S and the multicast group G to obtain a hash calculation result, and then perform modulo (also referred to as remainder) according to the number of link interfaces whose states are available in the link group ID 2. The available link interfaces in the link table entry 2 in the link group are 3, which are respectively an interface gi2/0/1, an interface gi2/0/2 and an interface gi 2/0/3. When the result of the modulo is 0/1/2, the router R2 may forward the multicast traffic to the router R1 through the interfaces gi2/0/1, gi2/0/2, and gi2/0/3, respectively.

For the router R1, no matter the multicast traffic sent by the router R2 is received from the interface gi2/0/1, the interface gi2/0/2 or the interface gi2/0/3, since the interface gi2/0/1, the interface gi2/0/2 and the interface gi2/0/3 belong to the link group ID1, the multicast traffic sent by the router R2 received by the router R1 through the interface gi2/0/1, the interface gi2/0/2 or the interface gi2/0/3 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver.

Therefore, when the link is restored from the failure state to the normal state, neither the router R1 nor the router R2 needs to refresh the SG multicast forwarding entries, but only needs to refresh the link entries in the link group. The convergence speed is irrelevant to the number of SG multicast forwarding table entries, so that the convergence time of the multicast service can be shortened.

The method for sharing multicast packet load provided in the embodiment of the present application is described in detail above with reference to fig. 1 to fig. 3, and an embodiment of the apparatus of the present application is described in detail below with reference to fig. 4 to fig. 10. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.

Fig. 4 is a schematic structural diagram of a first network device 400 according to an embodiment of the present application. The first network device 400 shown in fig. 4 may perform the corresponding steps performed by the first network device in the methods of the above embodiments. As shown in fig. 4, the first network device 400 includes: a receiving module 410, a determining module 420, and a selecting module 430.

It is to be understood that the first network device 400 may perform the respective steps performed by the first network device in the methods of the above embodiments, for example the respective steps performed by the first network device in the method of fig. 2. Specifically, the receiving module 410 may implement the method flow of step 210 in fig. 2, and is configured to receive the first multicast packet; determining module 420 may implement the method flow of step 220 in fig. 2, configured to determine, according to a multicast forwarding entry, a first link group corresponding to the first multicast message; the selecting module 430 may implement the method flow of step 230 in fig. 2, for selecting the first link to send the first multicast packet.

Optionally, the Internet Protocol (IP) addresses of the physical interfaces corresponding to the at least two parallel links are different.

Optionally, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group;

the determining module 420 is specifically configured to obtain the first identifier from the first multicast packet through the obtaining module 450; and determining the first link group according to the multicast forwarding table entry and the first identifier.

Optionally, the selecting module 430 is specifically configured to determine the at least two parallel links according to the identifier of the first link group; and selecting the first link from the at least two parallel links to send the first multicast message.

Optionally, the selecting module 430 is further configured to, when the state of the first link is unavailable, select a second link, other than the first link, from the at least two parallel links, and send the first multicast packet through the second link.

Optionally, the receiving module 410 is further configured to receive at least two messages sent by the second network device through each of the at least two parallel links, where the message sent by each link includes an identification ID of the second network device;

the determining module 420 is further configured to determine that the IDs of the second network devices included in each of the at least two messages are the same;

the first network device 400 further includes:

an establishing module 440, configured to establish the first link group including the at least two parallel links based on the ID of the second network device. Further, the first network device establishes the first link group based on the ID of the second network device, acquires the identifier of the first link group, establishes a correspondence between the first link group and the ID of the second network device, and establishes a correspondence between the first link group and the at least two parallel links.
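The establishment of a link group from neighbor-ID messages can be sketched as follows; the message callback, the naming scheme, and the data structures are illustrative assumptions. Links on which messages carry the same neighbor ID are gathered into one group.

```python
# Sketch: group parallel links by the neighbor ID carried in the messages
# received on each link. The callback shape and ID naming are assumptions.
from collections import defaultdict
from itertools import count

_group_ids = count(1)
groups_by_neighbor = {}            # neighbor ID -> link group identifier
links_in_group = defaultdict(set)  # link group identifier -> member links

def on_message(link, neighbor_id):
    """Handle a message carrying the sender's ID, received on a given link."""
    if neighbor_id not in groups_by_neighbor:
        groups_by_neighbor[neighbor_id] = f"ID{next(_group_ids)}"
    group = groups_by_neighbor[neighbor_id]
    links_in_group[group].add(link)   # correspondence: group -> parallel links
    return group

# Messages on three links all carry the same neighbor ID "R2",
# so the three links land in one link group.
g1 = on_message("gi2/0/1", "R2")
g2 = on_message("gi2/0/2", "R2")
g3 = on_message("gi2/0/3", "R2")
assert g1 == g2 == g3
assert links_in_group[g1] == {"gi2/0/1", "gi2/0/2", "gi2/0/3"}
```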

Optionally, the selecting module 430 is specifically configured to select a first link from the at least two parallel links according to the feature information of the first multicast packet, and send the first multicast packet through the first link.

Fig. 5 is a schematic structural diagram of another first network device 500 provided in an embodiment of the present application. As shown in fig. 5, the first network device 500 includes: a receiving module 510 and an establishing module 520.

It is to be understood that the first network device 500 may perform the corresponding steps performed by the first network device in the methods of the above embodiments.

Specifically, as an example, the receiving module 510 in the first network device 500 is configured to receive the first multicast join message through the first link; the establishing module 520 is configured to establish the multicast forwarding table entry.

The first multicast join message includes a first identifier of a first multicast packet, and the first link is a link in a first link group. The multicast forwarding table entry includes a corresponding relationship between the first identifier and an identifier of a first link group, the first link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different.

Optionally, the receiving module 510 is further configured to receive a second multicast join message through a second link, where the second multicast join message includes a second identifier of a second multicast packet, and the second link is a link in a second link group;

the establishing module 520 is further configured to establish the multicast forwarding entry, where the multicast forwarding entry further includes a correspondence between a second identifier and an identifier of a second link group.
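The multicast forwarding entries described above, each mapping a packet identifier to a link group identifier, can be sketched as a simple table; the key format and the names are illustrative assumptions.

```python
# Sketch of a multicast forwarding table mapping a packet identifier
# (here an (S, G) pair) to a link group identifier. Values are illustrative.
forwarding_table = {
    ("S1", "G1"): "link-group-1",  # learned from the first join, arriving on a group-1 link
    ("S2", "G2"): "link-group-2",  # learned from the second join, arriving on a group-2 link
}

def lookup_group(source, group):
    """Return the link group identifier for a flow, or None if unknown."""
    return forwarding_table.get((source, group))

assert lookup_group("S1", "G1") == "link-group-1"
assert lookup_group("S2", "G2") == "link-group-2"
```

Keeping the entry at link-group granularity is what allows a link failure or recovery to be absorbed by refreshing the link list alone, without touching the per-flow entries.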

Optionally, the receiving module 510 is further configured to receive the first multicast packet; the first network device 500 further comprises a determining module 530, a selecting module 540. The determining module 530 is configured to determine, according to the multicast forwarding entry, a first link group corresponding to the first multicast message; the selecting module 540 is configured to select a second link to send the first multicast packet, where the second link is one of the at least two parallel links.

Optionally, the determining module 530 is specifically configured to obtain the first identifier from the first multicast packet through the obtaining module 550; and determining the first link group according to the multicast forwarding table entry and the first identifier.

Optionally, the determining module 530 is specifically configured to determine the at least two parallel links according to the identifier of the first link group, and to select the first link from the at least two parallel links to send the first multicast message.

Optionally, the selecting module 540 is further configured to, when the interface state of the second link is unavailable, select a third link, other than the second link, from the at least two parallel links, and send the first multicast packet through the third link.
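Selecting another parallel link when the chosen link becomes unavailable can be sketched as follows; the state table and interface names are illustrative assumptions.

```python
# Sketch of falling back to another parallel link when the selected link's
# interface state is unavailable. State values and names are assumptions.

link_state = {"gi2/0/1": "down", "gi2/0/2": "up", "gi2/0/3": "up"}

def pick_with_failover(preferred, group_links):
    """Use the preferred link if up; otherwise pick another available link."""
    if link_state.get(preferred) == "up":
        return preferred
    for link in group_links:
        if link != preferred and link_state.get(link) == "up":
            return link
    return None  # no link in the group is usable

group = ["gi2/0/1", "gi2/0/2", "gi2/0/3"]
assert pick_with_failover("gi2/0/1", group) == "gi2/0/2"  # failover off a down link
assert pick_with_failover("gi2/0/2", group) == "gi2/0/2"  # preferred link kept if up
```

Because the fallback stays inside the same link group, the receiving side's group-based RPF check still passes and no forwarding entry needs to change.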

Optionally, the receiving module 510 is further configured to receive at least two messages sent by the second network device through each of the at least two parallel links, where the message sent by each link includes an identification ID of the second network device;

the determining module 530 is further configured to determine that the IDs of the second network devices included in each of the at least two messages are the same;

the establishing module 520 is further configured to establish, based on the ID of the second network device, the first link group including the at least two parallel links, and to establish a correspondence between the identifier of the first link group and the ID of the second network device.

Optionally, the selecting module 540 is specifically configured to select the second link from the at least two parallel links according to the feature information of the first multicast packet, and send the first multicast packet through the second link.

Fig. 6 is a schematic hardware configuration diagram of the first network device 2000 according to an embodiment of the present application. The first network device 2000 shown in fig. 6 may perform the corresponding steps performed by the first network device in the methods of the above embodiments.

As shown in fig. 6, the first network device 2000 includes a processor 2001, a memory 2002, an interface 2003, and a bus 2004. The interface 2003 may be implemented in a wireless or wired manner, and may specifically be a network card. The processor 2001, the memory 2002, and the interface 2003 are connected by the bus 2004.

It is to be understood that the first network device 2000 may perform the respective steps performed by the first network device in the methods of the above embodiments, for example, the respective steps performed by the first network device in the method of fig. 2.

As an example, the processor 2001 is configured to: receiving a first multicast message; determining a first link group corresponding to the first multicast message according to the multicast forwarding table entry; and selecting a first link to send the first multicast message.

The first link group comprises at least two parallel links between the first network device and a second network device, the second network device is a neighbor of the first network device, and the at least two parallel links are different. The first link is one of the at least two parallel links.

Specifically, the interface 2003 in the first network device 2000 may implement the method flow of step 210 in fig. 2, and is configured to receive the first multicast packet; processor 2001 in first network device 2000 may implement the method flow of step 220 in fig. 2, configured to determine, according to the multicast forwarding entry, a first link group corresponding to the first multicast message; the processor 2001 in the first network device 2000 may further implement the method flow of step 230 in fig. 2, configured to select the first link to send the first multicast packet.

The interface 2003 may specifically include a transmitter and a receiver for the first network device to implement the transceiving described above. For example, the interface 2003 is used to receive a first multicast packet. For another example, the interface 2003 is used to send the first multicast packet. For another example, the interface 2003 is used to receive messages sent by neighbors of the first network device.

The processor 2001 is configured to execute the processing performed by the first network device in the above embodiments. For example, the processor 2001 determines, according to a multicast forwarding entry, the first link group corresponding to the first multicast message; for another example, it establishes the multicast forwarding entry; and/or it performs other processes of the techniques described herein. As an example, the processor 2001 is configured to support steps 220 and 230 in fig. 2. The memory 2002 includes an operating system 20021 and an application 20022, and is configured to store programs, code, or instructions that, when executed by the processor or a hardware device, can complete the processes of the method embodiments involving the first network device. Optionally, the memory 2002 may include a read-only memory (ROM) and a random access memory (RAM). The ROM includes a basic input/output system (BIOS) or an embedded system; the RAM includes an application program and an operating system. When the first network device 2000 needs to be started, it boots through the BIOS solidified in the ROM, or through a bootloader in an embedded system, so as to enter a normal operation state. After the first network device 2000 enters the normal operation state, it executes the application program and the operating system running in the RAM, thereby completing the processes involving the first network device 2000 in the method embodiments.

It will be appreciated that fig. 6 only shows a simplified design of the first network device 2000. In practical applications, the first network device may comprise any number of interfaces, processors or memories.

Fig. 7 is a schematic hardware configuration diagram of another first network device 2100 according to an embodiment of the present application. The first network device 2100 shown in fig. 7 may perform the corresponding steps performed by the first network device in the methods of the above embodiments.

As illustrated in fig. 7, the first network device 2100 includes: a main control board 2110, an interface board 2130, a switch board 2120 and an interface board 2140. The main control board 2110, the interface boards 2130 and 2140, and the switch board 2120 are connected to the system backplane through the system bus to realize intercommunication. The main control board 2110 is used for completing functions such as system management, device maintenance, and protocol processing. The switch network board 2120 is used to complete data exchange between interface boards (interface boards are also called line cards or service boards). The interface boards 2130 and 2140 are used to provide various service interfaces (e.g., POS interface, GE interface, ATM interface, etc.) and implement forwarding of packets.

It is to be understood that the first network device 2100 may perform the respective steps performed by the first network device in the methods of the above embodiments, e.g., the respective steps performed by the first network device in the method of fig. 2.

Specifically, the interface board 2130 may implement the method flow of step 210 in fig. 2, and is configured to receive the first multicast packet; the main control board 2110 may implement the method flow of step 220 in fig. 2, and is configured to determine, according to a multicast forwarding entry, a first link group corresponding to the first multicast message; the main control board 2110 may implement the method flow of step 230 in fig. 2, for selecting the first link to transmit the first multicast packet.

The interface board 2130 may include a central processor 2131, a forwarding entry memory 2134, a physical interface card 2133, and a network processor 2132. The central processing unit 2131 is used for controlling and managing the interface board and communicating with the central processing unit on the main control board. Forwarding entry memory 2134 is used to store entries, such as the multicast forwarding entries described above. Physical interface card 2133 is used to complete the reception and transmission of traffic.

Specifically, the physical interface card 2133 is configured to receive a first multicast packet, and after receiving the first multicast packet, the physical interface card 2133 sends the first multicast packet to the central processing unit 2111 via the central processing unit 2131, and the central processing unit 2111 processes the first multicast packet.

It should be understood that operations on the interface board 2140 in the embodiment of the present application are the same as those of the interface board 2130, and are not described again for brevity. It should be understood that the first network device 2100 in this embodiment may correspond to the functions and/or various steps of the foregoing method embodiments, and are not described herein again.

In addition, it should be noted that there may be one or more main control boards; when there are multiple main control boards, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the first network device, the more interface boards are provided. There may also be one or more physical interface cards on an interface board. There may be no switching network board, or there may be one or more switching network boards; when there are multiple switching network boards, they can jointly implement load sharing and redundancy backup. Under a centralized forwarding architecture, the first network device may not need a switching network board, and an interface board undertakes the processing of the service data of the entire system. Under a distributed forwarding architecture, the first network device may have at least one switching network board, through which data exchange among the multiple interface boards is implemented, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capabilities of a first network device with the distributed architecture are greater than those of a device with the centralized architecture. Which architecture is adopted depends on the specific networking deployment scenario, and is not limited herein.

Fig. 8 is a schematic structural diagram of a second network device 800 according to an embodiment of the present application. The second network device 800 shown in fig. 8 may perform the corresponding steps performed by the second network device in the methods of the above embodiments. As shown in fig. 8, the second network device 800 includes: a receiving module 810, a determining module 820 and a sending module 830.

It is to be understood that the second network device 800 may perform the corresponding steps performed by the second network device in the methods of the above embodiments, such as the corresponding steps performed by the router R1 in fig. 3.

Specifically, as an example, the receiving module 810 in the second network device 800 is configured to receive a first multicast packet sent by a first network device through a first link; the determining module 820 is configured to determine, according to the fact that the first link is one link in the second link group, that the first multicast packet passes the reverse path forwarding (RPF) check; and the sending module 830 is configured to forward the first multicast packet.

The first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different.

Optionally, the determining module 820 is specifically configured to: determining the second link group corresponding to the first multicast message according to the multicast forwarding table entry; and determining that the first link is one link in a second link group, and determining that the first multicast message passes RPF check.

Optionally, the multicast forwarding entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group, and the determining module 820 is specifically configured to: acquiring a first identifier from the first multicast message; and determining the second link group according to the first identifier and the multicast forwarding table entry.

Optionally, the multicast forwarding entry further includes a correspondence between an identifier of the second link group and identifiers of at least two parallel links in the second link group, and the determining module 820 is specifically configured to: and determining that the first link is one link in the second link group according to the identifier of the second link group and the multicast forwarding table entry.

Optionally, the receiving module 810 is further configured to receive at least two messages sent by the first network device through each of the at least two parallel links, where the message sent by each link includes an identification ID of the first network device; the determining module 820 is further configured to determine that the IDs of the first network devices included in each of the at least two messages are the same;

the second network device 800 further comprises: an establishing module 840, configured to establish, based on the ID of the first network device, the second link group including the at least two parallel links, and to establish a correspondence between the second link group and the ID of the first network device.

Fig. 9 is a schematic hardware structure diagram of a second network device 2200 according to an embodiment of the present application. The second network device 2200 shown in fig. 9 may perform the corresponding steps performed by the second network device in the method of the above-described embodiment.

As shown in fig. 9, the second network device 2200 includes a processor 2201, a memory 2202, an interface 2203, and a bus 2204. The interface 2203 may be implemented by wireless or wired means, and specifically may be a network card. The processor 2201, memory 2202, and interface 2203 are connected by a bus 2204.

It is to be understood that the second network device 2200 may perform the corresponding steps performed by the second network device in the methods of the above embodiments, such as the router R1 in fig. 3.

As an example, the processor 2201 in the second network device 2200 is specifically configured to: receive a first multicast message sent by a first network device through a first link; determine, according to the fact that the first link is one link in a second link group, that the first multicast message passes the reverse path forwarding (RPF) check; and forward the first multicast message.

The interface 2203 may specifically include a transmitter and a receiver, which are used for the second network device to implement the above transceiving. For example, the interface is configured to support receiving a first multicast packet sent by a first network device through a first link. For another example, the interface 2203 is used for forwarding the first multicast packet.

The processor 2201 is configured to perform the processing performed by the second network device in the foregoing embodiments. For example, the processor 2201 determines, according to the fact that the first link is one link in the second link group, that the first multicast packet passes the RPF check; and/or it performs other processes of the techniques described herein. The memory 2202 includes an operating system 22021 and an application 22022, and is configured to store programs, code, or instructions that, when executed by the processor or a hardware device, can complete the processes of the method embodiments involving the second network device. Optionally, the memory 2202 may include a read-only memory (ROM) and a random access memory (RAM). The ROM includes a basic input/output system (BIOS) or an embedded system; the RAM includes an application program and an operating system. When the second network device 2200 needs to be started, it boots through the BIOS solidified in the ROM, or through a bootloader in an embedded system, so as to enter a normal operation state. After the second network device 2200 enters the normal operation state, it executes the application program and the operating system running in the RAM, thereby completing the processing procedures involving the second network device 2200 in the method embodiments.

It is to be appreciated that fig. 9 shows only a simplified design of the second network device 2200. In practical applications, the second network device may comprise any number of interfaces, processors or memories.

Fig. 10 is a schematic hardware structure diagram of another second network device 2400 according to an embodiment of the present application. The second network device 2400 shown in fig. 10 may perform the corresponding steps performed by the second network device in the methods of the above embodiments.

As illustrated in fig. 10, the second network device 2400 includes: main control board 2410, interface board 2430, switch board 2420 and interface board 2440. The main control board 2410, the interface boards 2430 and 2440, and the switch board 2420 are connected to the system backplane through the system bus to realize intercommunication. The main control board 2410 is configured to perform functions such as system management, device maintenance, and protocol processing. The switch board 2420 is used to complete data exchange between interface boards (interface boards are also called line cards or service boards). The interface boards 2430 and 2440 are used to provide various service interfaces (e.g., POS interface, GE interface, ATM interface, etc.) and implement forwarding of data packets.

It should be understood that the second network device 2400 may perform the corresponding steps performed by the second network device in the methods of the above embodiments, for example, the corresponding steps performed by the router R1 in fig. 3.

The interface board 2430 can include a central processor 2431, a forwarding entry memory 2434, a physical interface card 2433, and a network processor 2432. The central processing unit 2431 is used for controlling and managing the interface board and communicating with the central processing unit on the main control board. The forwarding entry store 2434 is used to store entries, such as the multicast forwarding entries described above. Physical interface card 2433 is used to complete the reception and transmission of traffic.

It should be understood that the operations on the interface board 2440 in the embodiment of the present application are consistent with the operations of the interface board 2430, and therefore, for brevity, detailed descriptions are omitted. It should be understood that the second network device 2400 of this embodiment may correspond to the functions and/or various steps implemented in the foregoing method embodiments, and are not described herein again.

In addition, it should be noted that there may be one or more main control boards; when there are multiple main control boards, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the second network device, the more interface boards are provided. There may also be one or more physical interface cards on an interface board. There may be no switching network board, or there may be one or more switching network boards; when there are multiple switching network boards, they can jointly implement load sharing and redundancy backup. Under a centralized forwarding architecture, the second network device may not need a switching network board, and an interface board undertakes the processing of the service data of the entire system. Under a distributed forwarding architecture, the second network device may have at least one switching network board, through which data exchange among the multiple interface boards is implemented, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capabilities of a second network device with the distributed architecture are greater than those of a device with the centralized architecture. Which architecture is adopted depends on the specific networking deployment scenario, and is not limited herein.

The embodiment of the application also provides a system for sharing multicast message load, which comprises a first network device and a second network device. Wherein the first network device may perform corresponding steps performed by the first network device in the method of the above-described embodiment, for example, the first network device in the method of fig. 2 or the router R2 in fig. 3. The second network device may perform the corresponding steps performed by the second network device in the methods of the above embodiments, such as router R1 in fig. 3.

As an example, the first network device is to: receiving a first multicast message; determining a first link group corresponding to the first multicast message according to a multicast forwarding table entry, where the first link group includes at least two parallel links between the first network device and a second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device; and selecting a first link to send the first multicast message, wherein the first link is one of the at least two parallel links.

The second network device is to: receive a first multicast message sent by the first network device through a first link, where the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different; determine, according to the fact that the first link is one link in the second link group, that the first multicast message passes the RPF check; and forward the first multicast message.

Embodiments of the present application also provide a computer-readable medium storing program code which, when run on a computer, causes the computer to perform the methods in the above aspects. Such computer-readable memories include, but are not limited to, one or more of the following: a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), a flash memory, an electrically erasable PROM (EEPROM), and a hard drive.

An embodiment of the present application further provides a computer program product, which is applied to a first network device, and the computer program product includes: computer program code which, when run by a computer, causes the computer to perform the method of any possible implementation of any of the above aspects.

An embodiment of the present application further provides a chip system, which is applied to a first network device, and the chip system includes: the chip system comprises at least one processor, at least one memory and an interface circuit, wherein the interface circuit is responsible for information interaction between the chip system and the outside, the at least one memory, the interface circuit and the at least one processor are interconnected through lines, and instructions are stored in the at least one memory; the instructions are executable by the at least one processor to perform the operations of the first network device in the methods of the various aspects described above.

In a specific implementation process, the chip may be implemented in the form of a Central Processing Unit (CPU), a Micro Controller Unit (MCU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), a system on chip (SoC), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Programmable Logic Device (PLD).

An embodiment of the present application further provides a computer program product applied to a first network device, the computer program product including a series of instructions that, when executed, perform the operations of the first network device in the methods of the above aspects.

It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely illustrative. The division of the units is only a logical functional division, and other divisions may be used in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the shown or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.

The units described as separate parts may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.

When the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
