SDN and NFV are the next phase of technology change, helping service providers launch services in a single click. It is all about network programmability using open-source software-defined network controllers.
Sunday, May 18, 2014
mLDP Signalling: In Band and Out of Band
In my previous post, I discussed the overall architecture of mLDP. This post focuses on mLDP signalling, which provides two functions:
1. FEC Discovery for a MP LSP
2. Assigning multicast flow to a MP LSP
mLDP can use two signalling methods: in-band signalling and out-of-band signalling. The FEC uniquely identifies the MP LSP within the network using a combination of the root address and an opaque value (for MVPN, derived from the VPN-ID). The signalling maps the multicast streams that will run over that MP LSP.
• In-Band Signalling
Opaque Value is used to map an MP LSP
Opaque value is derived from the multicast flow
• Out-Of-Band Signalling
Uses an overlay protocol to build the tree
Opaque values are statically configured
MP LSP creation is on-demand
In-Band Signalling Operation
It is called in-band signalling because the Egress PE uses the multicast stream information to create the Opaque Value and the Ingress PE uses this Opaque value to learn what multicast flow to send on the MP LSP.
1. The Egress PE receives an (S, G) IGMP join from a receiver. It creates a label mapping message containing the FEC TLV with the Opaque Value and a label TLV based on this information. The root address in the FEC Element is derived from the BGP next hop towards (S), or from (S) itself. All egress PE routers interested in the same flow create exactly the same FEC Element.
Debug output of in-band signalling:-
mpls_ldp[1042]: DBG-mLDP[4142372544], Peer(9.9.0.7:0)/Root(9.9.0.3): ldp_mldp_sig_peer_msg_rx_nfn: 'Label-Mapping' msg (type 0x400), msg_id 167, lbl 23, MP2MP-Dn FEC { root 9.9.0.3, opaque-len 14 }, no MP-Status, status_code 0x0
2. The Egress PE then builds an MP LSP towards the Ingress PE (root of the tree) using the label mapping message with the downstream label (lbl 23, MP2MP-Dn FEC { root 9.9.0.3, opaque-len 14 }). At each hop along the way, the P routers use the same FEC Element, but the downstream label changes. When the Ingress PE receives the label mapping message, it parses the Opaque Value to extract the multicast stream information and creates the appropriate (S, G) state and mapping information.
3. When the Ingress PE receives the (S, G) stream, it forwards it onto the correct MP LSP using its label information.
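As a rough illustration, here is a minimal IOS-XR sketch of enabling mLDP in-band signalling for a VRF (the VRF name CUST and Loopback0 are assumptions; exact syntax may vary by release):
mpls ldp
 mldp
!
multicast-routing
 vrf CUST
  address-family ipv4
   mdt source Loopback0
   mdt mldp in-band-signaling ipv4
   interface all enable
!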
Out-of-Band (Overlay) Signalling Operation
An overlay protocol signals the mapping of the IP multicast flow to an MP LSP.
1. The Egress PE creates the FEC through a static procedure and builds the MP LSP hop-by-hop to the root based on FEC information.
2. Using overlay signalling, the Egress PE signals the Ingress PE to forward the IP multicast stream over the MP LSP with the unique FEC value.
3. The Ingress PE then forwards the (S, G) traffic onto the tree using the mapping information.
Saturday, April 19, 2014
MVPN Over MLDP
In legacy MVPN scenarios, a GRE-encapsulated MDT tunnel is created among the PEs, and all customer multicast traffic of a particular VPN is encapsulated with the MDT group address and tunnelled over it. In the MLDP scenario there is no PIM configured in the core and the multicast packets are MPLS encapsulated, which makes the core PIM free.
In MLDP deployments a VPN-ID is used instead of the default MDT multicast group address. The default MDT is created using an MP2MP LSP to support low-bandwidth and control traffic between the VRFs. The data MDT is created using a P2MP LSP to support high-bandwidth traffic from a particular source. PIM neighbor relationships between the VRFs are seen over the LSP-VIF. At the edge towards the customer (CE), PIM multicast routing is enabled for that VRF.
The default MDT is created using an MP2MP LSP, for which a static root has to be defined; normally two roots are configured for redundancy. The MDT uses the LSP-VIF to transmit VPN multicast packets, i.e., PIM runs over the MP2MP LSP.
In the figure below, an MP2MP LSP is created with the root at PE1. The opaque value for MVPN is in the format (VPN-ID). Both the upstream and downstream MP2MP LSPs are formed and labels are exchanged for the data transfer.
!To create LSP in IOS
ip multicast vrf A mpls source Loopback0
ip vrf A
rd 65001:1
vpn id 65001:1 --- For creating Opaque value
route-target export 65001:1
route-target import 65001:1
mdt default mpls mldp 1.1.1.1 - Root node address
!To create LSP in IOS-XR
multicast-routing
vrf ABC
mdt default mldp ipv4 9.9.0.3
!
mpls ldp
router-id x.x.y.y
mldp
!
vrf ABC
vpn id 65000:1
address-family ipv4 unicast
!
LDP capabilities are extended to support multicast over LSPs. These capabilities are advertised through capability TLVs during LDP session initialization. mLDP defines two new capability TLVs:
1. P2MP Capability: TLV 0x0508
2. MP2MP Capability: TLV 0x0509
In mLDP, the egress LSR initiates tree creation by looking at the root address, which is derived from BGP. Each LSR in the path resolves the next hop towards the root and sends the label mapping upstream.
mLDP Control Plane:-
mLDP Data Plane:-
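To verify the resulting trees, the mLDP database and peers can be checked on the PE and P routers; a hedged example (output varies by platform and release):
show mpls mldp database
show mpls mldp neighbors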
Wednesday, January 9, 2013
Basics Of Multicast
The most popular and widely deployed multicast routing protocol is Protocol Independent Multicast (PIM). Unlike other multicast routing protocols such as Distance Vector Multicast Routing Protocol (DVMRP) or Multicast Open Shortest Path First (MOSPF), PIM does not maintain a separate multicast routing table but relies on the existing IGP table when performing its Reverse Path Forwarding (RPF) check.
PIM can be configured in Dense Mode, Sparse Mode or Sparse-Dense Mode (hybrid mode).
PIM Dense Mode (PIM-DM)
PIM-DM uses a flood-and-prune mechanism. When a source sends to an IP multicast group address, each router that receives the packet creates an (S, G) forwarding state entry. The receiving router initially forwards the multicast packet out interfaces that meet the following requirements:
• the packet passes the Reverse Path Forwarding (RPF) check on the incoming interface, and
• the outgoing interface has PIM neighbours or Internet Group Management Protocol (IGMP) receivers.
To pass the RPF check, an incoming multicast packet must be received on the interface that the IGP routing table indicates is used to reach the source of the multicast packet.
Note that multicast-enabled interfaces must have the corresponding unicast routes to the source in the IGP to avoid black holes. Where equal-cost paths exist, the unicast route with the highest upstream neighbour IP address is chosen. Also, when there are multiple routers forwarding onto the same subnet, a PIM assert process is triggered to elect a single forwarder.
(Design Considerations: How to select multicast group address)
When state is created according to the RPF check, a source tree or shortest path tree (SPT) is built with the source at the root (the first-hop router). Multicast packets following the tree take the optimal path through the network and packets are not duplicated over the same subnets.
Last-hop routers with no receivers then prune back from the tree; however, the entries in the upstream neighbour's outgoing interface list (OIL) are maintained. These entries periodically (every 3 minutes) move back into a forwarding state and the prune process re-occurs. PIM-DM is usually not suitable for a WAN environment and is recommended only for small LAN networks.
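A minimal IOS sketch for enabling PIM-DM (interface names are placeholders):
ip multicast-routing
!
interface FastEthernet0/0
 ip pim dense-mode
!
interface Serial0/0
 ip pim dense-mode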
PIM Sparse Mode (PIM-SM)
PIM-SM uses an explicit join model, where routers with active receivers join multicast groups. This has advantages over the flood-and-prune mechanism described for PIM-DM. PIM-SM uses a control point known as the Rendezvous Point (RP), a common point where all sources first register and to which all receivers initially send their joins to learn about the sources.
(Multicast does not work with two loopbacks)
First-hop designated routers (the routers with sources attached) register the sources with the RP. When the RP sees the source traffic coming in, it builds an SPT back to the source, so there are (S, G) state entries between the RP and the source. The last-hop designated routers (the routers with receivers attached) join towards the RP hop by hop, creating a shared tree (*, G), with the '*' meaning any source.
When a source starts transmitting, the initial multicast traffic flows to the RP via the SPT and then down to the receivers for that group via the shared tree (with the RP being the root). This may result in a non-optimal path to a receiver, depending on where the RP is positioned.
To address this problem, a mechanism known as SPT switchover can be used. The last-hop router, depending on the traffic rate, sends an (S, G) join towards the source to create an optimal SPT forwarding path and, once it is established, sends prunes towards the RP. The decision to create an SPT to the source is dependent on the SPT threshold in terms of bandwidth.
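A minimal IOS sketch for PIM-SM with a static RP (the RP address 10.0.0.1 is a placeholder); the last line optionally keeps traffic on the shared tree by disabling SPT switchover:
ip multicast-routing
ip pim rp-address 10.0.0.1
!
interface FastEthernet0/0
 ip pim sparse-mode
!
ip pim spt-threshold infinity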
PIM Sparse-Dense Mode
This mode is a combination of the two previous modes. The decision to use sparse or dense mode for a particular multicast group depends on whether the group has a matching entry in the Group-to-RP mapping cache. If an entry exists in the cache, that group operates in sparse mode on that interface. If the multicast group does not have a corresponding entry in the mapping cache, that group operates in dense mode.
This mode is required when using the Cisco Auto-RP mechanism to distribute Group-to-RP mappings.
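A hedged Auto-RP sketch (Loopback0 and the scope value are assumptions): interfaces run sparse-dense mode so the Auto-RP groups 224.0.1.39 and 224.0.1.40 can be flooded in dense mode.
interface FastEthernet0/0
 ip pim sparse-dense-mode
!
!On the candidate RP
ip pim send-rp-announce Loopback0 scope 16
!
!On the mapping agent
ip pim send-rp-discovery Loopback0 scope 16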
Thursday, April 29, 2010
Advantages of MPLS
An interview often starts with the question "What are the advantages of MPLS?", and most of the time candidates don't know much about it and start fumbling with poor answers. Here are a few quick answers about MPLS advantages; if you have others, please share.
1. No propagation of customer routes in the service provider core.
2. With legacy GRE the customer is responsible for the management, but in the case of MPLS the SP is responsible.
3. Customers can use the same IP address space, which is not possible in the case of GRE.
4. MPLS saves cost compared to an ATM or Frame Relay network.
5. MPLS improves response times.
6. Customers can have QoS according to their requirements.
7. Fast reroute by using traffic engineering.
8. MVPN support, which saves a lot of bandwidth.
Monday, August 31, 2009
GDOI In MVPN
This is an excerpt from one of the Cisco docs.
With Cisco IOS Secure Multicast, users can enjoy the benefits of encryption of "native IP multicast" traffic within their larger enterprise environment. Cisco IOS Secure Multicast helps customers extend their reach to all of their corporate IP multicast applications, while providing enhanced security. Having been tested with many applications and delivered across multiple platforms, Cisco IOS Secure Multicast enhances user experience and efficiently secures multicast applications. The unique integration between GDOI and IPsec provides a level of trust on the corporate internal network that is similar to the existing cryptographic techniques. This ability to provide a unique model differentiates Cisco Systems from its competitors.
Wednesday, May 27, 2009
How To Maintain S,G For Long Time
A really awesome command that can help keep an (S, G) entry for a long period:
ip pim sparse sg-expiry-timer
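For example, to hold the (S, G) entries for one hour (the value is in seconds; the supported range depends on the release):
ip pim sparse sg-expiry-timer 3600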
regards
shivlu jain
Monday, May 4, 2009
Deploying & Testing Of SSM in Service Provider Cloud
Implementation of SSM is really easy. I have already covered how to implement SSM in a service provider cloud. In this post, a basic test topology is used for a VRF customer which is using multicast at their end. The same stream needs to be transported by the MPLS service provider.
The implementation is fully covered in the document. Click here to download it.
regards
shivlu jain
Sunday, April 26, 2009
Testing Multicast Streaming
IPTV is becoming the order of the day, and I finally decided to start working on it. For complete multicast streaming testing, I configured a multicast server which can send the stream to the whole cloud. For receiving the stream I used SSM as well as BSR mode. I still need to test a lot of things, and within a few days I will publish the documents.
regards
shivlu jain
Wednesday, April 8, 2009
Implementation Of Auto-RP and BSR in Service Provider Network
Over the last few days I have been receiving a lot of queries about the implementation of Auto-RP and BSR in the service provider core. The implementation is really easy, but a basic understanding of multicast is required. I have already prepared two low-level documents on multicast which can help you implement it in a very easy manner with no errors (a quick sketch follows the list below).
1. Implementation of Auto-RP in service provider network.
2. Implementation of BSR in service provider network.
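As a quick reference, hedged IOS sketches of the two mechanisms (Loopback0 and the scope value are assumptions):
!Auto-RP: on the candidate RP and on the mapping agent
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16
!
!BSR: on the candidate RP and on the candidate BSR
ip pim rp-candidate Loopback0
ip pim bsr-candidate Loopback0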
regards
shivlu jain
Friday, March 27, 2009
Sparse Mode Made Clients Down
Yesterday during multicast testing we enabled PIM on the LAN interface of a Cisco router as well as on the serial interfaces. In the lab, end-to-end customers were working over L2TPv3 and the tunnel was establishing successfully. The moment PIM sparse mode was enabled on the LAN interface, the end-to-end customer was no longer reachable. If the customer was configured as a Layer 3 VPN, it worked fine end to end. For the Layer 2 circuit, the L2 tunnel was up but no data flowed over it. As soon as PIM sparse mode was disabled on the LAN interface, data started flowing on the L2 tunnel again. The issue was faced only with the L2TPv3 protocol after enabling PIM sparse mode on the LAN interface.
Cisco IOS used during testing:- c1841-spservicesk9-mz.123-14.YT1.bin
regards
shivlu jain
Thursday, March 26, 2009
MTI (Multicast Tunnel Interface) Is Coming Up But Not PIM Neighborship
Yesterday I got a query on MVPN:-
In MVPN, the MTI (Multicast Tunnel Interface) is coming up, but end-to-end PIM neighborship over the MTI tunnel is not coming up. What is the reason for this, and how is this possible?
The question is very interesting. The formation of the MTI depends on the default MDT group configured under the VRF, so once that group is reachable in the MPLS VPN cloud the MTI tunnel comes up. PIM neighborship, however, depends on PIM sparse-dense mode or sparse mode being enabled; if it is not coming up, PIM sparse-dense mode or sparse mode is definitely missing somewhere in the path.
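A few verification commands that can help isolate whether the issue is MDT reachability or a PIM mode mismatch (the VRF name CUST and the group 239.1.1.1 are placeholders for the configured default MDT group):
show ip pim mdt bgp
show ip pim vrf CUST neighbor
show ip mroute 239.1.1.1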
regards
shivlu jain
Wednesday, March 25, 2009
Pim Vrf Neighborship Not Coming Up In SSM
Suppose the MPLS VPN backbone is running in dense mode and serving MVPN services, and the designers want to migrate from dense mode to SSM. The MPLS VPN core is using a type 2 RD, which is actually reserved for inter-AS MVPN, and some of the core routers are running SB or SRC series IOS. During the migration of the core you should not face any issue, but when the migration of MVPN customers starts you might face the problem of VRF PIM neighborship not coming up even though the MDT tunnels are up. The main reason for this is the MDT SAFI, which I have already covered in my previous post. In these cases, the ipv4 mdt address family needs to be activated with the RR (see the sketch below). As soon as it is configured, you will be glad to see the MVPN VRF PIM neighborships come up.
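A hedged IOS sketch of activating the MDT SAFI towards the route reflectors (the AS number and RR addresses are placeholders):
router bgp 65000
 address-family ipv4 mdt
  neighbor 10.0.0.11 activate
  neighbor 10.0.0.12 activate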
regards
shivlu jain
Tuesday, February 17, 2009
Implementation of SSM
Yesterday I tested SSM in a service provider cloud. In my previous post I have already described the pros and cons of SSM. Opening gambit: how to implement SSM in the SP cloud. The configuration is nothing much; you only need to understand the concept behind it. I tested the scenario with the help of VLC player, really a useful tool for generating a multicast stream. One thing to keep in mind: if LDP breaks in the SP cloud, MVPN will not break, because MVPN does not run over LDP.
A 239.232.0.0 series (How to select multicast group) is used for the default and data MDT (Basics of MVPN). In the first phase I implemented the solution with the default MDT and checked the stream; it was flowing across all the neighbours. Thereafter I used the data MDT to check its convergence, which can be verified with the help of show ip mroute vrf.
Commands used for SSM
1. All interfaces should be PIM enabled.
ip pim sparse-mode
2. Loopback which is used for MP-iBGP should be pim enabled.
Never use two loopbacks for MP-BGP.
3. Create an ACL which defines the MDT groups
ip access-list standard 1
permit 239.232.0.0 0.0.255.255
4. Bind the acl with SSM
ip pim ssm range 1
Note:- Do not use ip pim ssm default, because it only covers the 232.0.0.0/8 range; the 239.232.0.0 MDT groups would then not be treated as SSM and the stream would stop forwarding.
5. SPT Threshold is not going to work with SSM
The data MDT will be the key player with SSM, because there are no more (*, G) entries; you will find only (S, G).
regards
shivlu jain
Sunday, February 8, 2009
Multicast VPN FAQ
Over the last few days a discussion on MVPN has been going on among me, Chintan Shah (Colt Technologies) and Harold Ritter (Cisco). As a consequence, a lot of hidden concepts came out, so I finally compiled this FAQ so that it can be used by others as a reference.
I would like to thank hritter for sharing his great experience with us.
MVPN Discussion & FAQ
Q:- The data and default MDT are based on which draft?
A:- For the data MDT, the method to signal the source address is described in draft-rosen-vpn-mcast section 7.2, which is supported by both IOS and JUNOS.
http://www.potaroo.net/ietf/idref/draft-rosen-vpn-mcast/#page-19
For the default MDT, the signaling in IOS is done using draft-nalawade-idr-mdt-safi, which is not supported in JUNOS.
http://tools.ietf.org/html/draft-nalawade-idr-mdt-safi-03
Q:- Does MVPN require Sparse Mode Or SSM?
A:- MVPN can be implemented with both. But in a multivendor environment, Juniper (JUNOS) supports SSM only for the data MDT, not the default MDT; for implementing the default MDT one needs to deploy Anycast RP.
Q:- Does SP need to configure all routers for MSDP?
A:- It depends on the requirement. If the SP has more traffic downstream, those P routers can be used for MSDP peering. So the answer is no; if you have 10 P routers, any subset of them (2, 5 or all 10) can be used for MSDP peering.
Q:- How to announce RP in case of using Anycast RP?
A:- If the SP deploys an Anycast RP address in the core, then static RP is the best option. Another option is to use a dynamic RP mechanism like Auto-RP or BSR.
Q:- Which type of entries created in SSM & in Anycast RP?
A:- In SSM only (S, G) entries are created. With Anycast RP both (*, G) and (S, G) entries are created.
Q:- How is "ip pim spt-threshold infinity" used in the SP domain?
A:- By default, Cisco IOS sets the SPT threshold value to 0. ip pim spt-threshold infinity can be used only with ASM, because ASM uses (*, G) state, while SSM supports only (S, G), so the command does not apply there.
Q:- Does Juniper support auto-rp?
A:- Yes, it is supported by juniper.
http://www.juniper.net/techpubs/software/junos/junos91/swconfig-multicast/configuring-auto-rp.html
Q:- How to provide the redundancy in case of Anycast RP & SSM?
A:- In the case of Anycast RP, if any RP fails, another RP from the RP set takes over; in the RP set all routers are configured with the same IP address. In the case of SSM, no RP is required.
Q:- Cisco IOS MDT SAFI implementation is based on which draft?
A:- The IOS MDT SAFI implementation is based on the following draft.
http://tools.ietf.org/html/draft-nalawade-idr-mdt-safi-03
Q:- Advantage of SSM Vs SM
A:- An RP infrastructure is not required in the case of SSM, but in SM it is mandatory.
Q:- Does P routers participate in maintaining the states?
A:- No, only the PEs hold the customer multicast state; the core is free from these states.
Q:- Difference between SSM Vs SM in case of update,register messages?
A:- SSM uses PIM-SM with a few modifications. RFC4601 section 4.8.1 defines the modifications to the PIM SM protocol to support SSM. Beyond these modifications, all normal PIM SM functionality and messages are required, including periodic join messages.
http://tools.ietf.org/html/rfc4601
Q:- Can SP use Bi-Dir in core?
A:- Yes, if the SP does not want to create (S, G) entries. Bidir is used mainly when the SP has a very large number of VPNs.
Q:- Does Cisco/Juniper support bi-dir?
A:- Bidir is supported by Cisco on all platforms, but Juniper does not support it.
Draft:- http://www.juniper.net/solutions/literature/white_papers/200291.pdf
Q:- Does Anycast RP require MSDP?
A:- RFC 4610 allows you to run Anycast RP without MSDP by having the RP that receives the register message replicate it to the other RP(s) in the RP set. Section 3 of RFC 4610 explains this mechanism in detail.
http://www.ietf.org/rfc/rfc4610.txt?number=4610
Q:- Does cisco support Anycast with MSDP?
A:- Yes
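To close, a hedged Anycast RP sketch with MSDP peering between two RPs, as discussed above (all addresses are placeholders; RP1 is shown, RP2 mirrors it with its own unique loopback):
!Shared Anycast RP address, identical on both RPs
interface Loopback1
 ip address 10.1.1.1 255.255.255.255
!
!Unique address used for the MSDP peering
interface Loopback0
 ip address 10.0.0.1 255.255.255.255
!
ip pim rp-address 10.1.1.1
!
ip msdp peer 10.0.0.2 connect-source Loopback0
ip msdp originator-id Loopback0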
regards
shivlu jain
Monday, February 2, 2009
Upgradation of RR to MDT SAFI
How to upgrade the core router to MDT SAFI
Upgrading the core routers to MDT SAFI is one of the biggest challenges for a service provider. Assume the SP has two RRs and every PE peers with both of them. A test bed was created for the given scenario, which is shown explicitly with some test cases and their outputs.
Basic Scenario
PE1, PE2, RR1 & RR2 are cisco 7200 with IOS 12.4 15T1
We have created a test vrf with default mdt for group 239.1.1.1. End to end multicast tunnel established.

Figure 1
Test Bed 1
In test bed one we upgraded the IOS of PE1 to 12.2(31)SB13, which supports the MDT SAFI feature, while the route reflectors were still using the non-standard (pre-MDT-SAFI) behaviour. We did not face any issue after the upgrade, and the route reflectors received the MDT values from PE1 with extended community 2:65500:1.
Test Bed 2
In the second test bed we upgraded the IOS of RR1 from 12.4(15)T1 to 12.2(31)SB13. After the boot-up process completed we checked the MDT BGP values but did not find anything, so under the BGP address-family ipv4 mdt we activated the neighbourship towards PE1. After that we checked again on RR1 but still did not find anything. Meanwhile, RR2 was receiving the values with extended community 2:65500:1 from PE2 but not from PE1. Thereafter we activated the ipv4 mdt neighbourship on PE1 towards RR1; as soon as it was activated, RR1 was able to receive MDT BGP routes without the extended community. But RR2 was still receiving routes only from PE2, not from PE1, so we activated the ipv4 mdt neighbourship towards RR2 as well. After that, RR2 received the updates from PE1 with extended community 2:65500:1. RR1, however, was not forwarding the MDT SAFI updates to PE2; PE2 was only receiving the updates with the extended type 2 community from RR2. For this we needed to activate the ipv4 mdt address family for PE2; as soon as it was activated, PE2 was able to receive the routes from both RRs.
Results:- If the PE is using the MDT SAFI and the route reflectors are using a mix of MDT SAFI and pre-MDT SAFI, then on the PE you need to activate ipv4 mdt towards both route reflectors, so that the PE sends MDT SAFI updates to RR1 and updates with the extended 2:65500:1 community to RR2. In short, the upgraded RR1 is backward compatible with the PEs with respect to MDT; the only thing to take care of is to enable the MDT SAFI for the non-MDT PE as well.
Test Bed 3
In this test bed we upgraded the IOS of PE2 to 12.2(31)SB13. After the boot-up process we checked the MDT BGP values but did not find anything. Then we activated the ipv4 mdt neighbourship towards RR2, which is running the 12.4(15)T1 IOS; as soon as it came up, PE2 was able to receive the updates from RR2. Thereafter the neighbourship towards RR1 was activated and PE2 was able to receive the routes from RR1 as well. The routes received by PE2 were without the standard extended type 2 community.
Result:- The 12.4(15)T1 code provided backward compatibility with both. But if the RR is upgraded to the 12.2(31)SB13 series, it can send and receive the updates only to MDT SAFI peers, not to non-MDT-SAFI peers.
If we upgrade the IOS of one route reflector while the second keeps running non-MDT-SAFI code, the core routers will get the MVPN routes only from the non-MDT-SAFI RR, so redundancy cannot be provided. The best approach is therefore to upgrade the PE routers first and only then upgrade the route reflectors.
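During such a migration, the MDT SAFI table can be checked on the PEs and RRs; a hedged example (the VRF name is a placeholder):
show ip bgp ipv4 mdt all
show ip bgp ipv4 mdt vrf TEST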
regards
shivlu jain
Wednesday, January 28, 2009
Cisco and Juniper SSM Deployment with Anycast RP
In one of the discussions, hritter from Cisco clarified a good point regarding the deployment of Anycast RP and SSM with Cisco and Juniper. If you have Juniper boxes as well as Cisco boxes and you are planning to deploy an SSM-only solution in the core, it won't work with Juniper, because JUNOS supports SSM only for the data MDT, not the default MDT. But for a basic MVPN deployment the default MDT is mandatory. So, since JUNOS is not going to support it, you require some protocol which can help your default MDT converge. For that you have to deploy Anycast RP, which will be responsible for the convergence of the default MDT, and then you can deploy SSM for your data MDT.
Really an awesome solution.
regards
shivlu jain
Thursday, January 15, 2009
Cisco was not using Mdt-Safi as per standard
One must understand the concept behind the MDT SAFI. To distinguish MVPN routes, Cisco routers append a type 2 RD value, which is not legal as per RFC 4364 section 4.2; that section explicitly shows that the type 2 RD value is meant for carrying a (4-byte) AS number. But Cisco routers were using type 2 for multicast VPN routes. The problem I have seen during the integration of Cisco with another vendor is that if the Cisco router sends the MVPN route with a type 2 RD, the other router may not understand the route and may behave abnormally. So during the integration of Cisco with another vendor where MVPN is required, check the MDT SAFI support or use the Cisco IOS 12.2SB series, which does not use the type 2 values for MVPN routes.
You can see the various RD types below:
Type 0 (0x0000) | 2-byte AS number | locally assigned value
Type 1 (0x0001) | IPv4 address | locally assigned value
Type 2 (0x0002) | 4-byte AS number | locally assigned value
regards
shivlu jain
Saturday, January 3, 2009
Design Considerations: How to select multicast group address
Multicast addresses always fall in the 224–239 range, which means the first four bits are fixed, leaving 28 significant bits. For the conversion of a multicast IP address to a MAC address, the prefix 01-00-5e is reserved; this consumes 24 bits of the MAC address, and one additional bit is fixed at 0 (the story goes that only half of the OUI block was purchased for multicast use, though that may be part legend). So only 23 bits of the group address are copied into the MAC address, and 5 bits are lost during the conversion from IP to MAC. As a result, 32 multicast IP addresses map to a single MAC address such as 0x0100.5e01.1020. That is why, during campus multicast design, it is always said that this address overlap should be kept in mind.
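A quick worked example of the overlap: the groups below differ only in the high-order bits that are lost in the mapping, so all of them map to the same MAC address.
224.1.16.32   -> low 23 bits = 01.10.20 (hex) -> MAC 0100.5e01.1020
225.1.16.32   -> low 23 bits = 01.10.20 (hex) -> MAC 0100.5e01.1020
239.129.16.32 -> low 23 bits = 01.10.20 (hex) -> MAC 0100.5e01.1020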
Do not use x.0.0.x or x.128.0.x group addresses
Multicast addresses in the 224.0.0.x range are considered link-local multicast addresses. They are used for protocol discovery and are flooded to every port. For example, OSPF uses 224.0.0.5 and 224.0.0.6 for neighbour and DR discovery. These addresses are reserved and are not constrained by IGMP snooping, so do not use them for an application. Further, since there is a 32:1 overlap of IP multicast addresses to Ethernet MAC addresses as explained above, any multicast address in the [224-239].0.0.x and [224-239].128.0.x ranges should NOT be used.
regards
shivlu jain
Friday, January 2, 2009
Design Considerations For MVPN
When deploying a multicast VPN service, providers try to optimize multicast traffic distribution and delay while reducing the amount of state. The following considerations have given MVPN providers direction in their deployments:
a) Core multicast routing state should typically be kept to a minimum.
b) MVPN packet delays should typically be the same as for unicast traffic.
c) Data should typically be sent only to PEs with interested receivers.
e) The number of multicast routing states.
f) The overhead of managing the RP if PIM-SM is used.
g) The difference in forwarding delay between shared trees and source trees, which is very important.
regards
shivlu jain
Wednesday, December 31, 2008
Cisco IOS Multicast History
10 years of Cisco IOS Multicast History
1994: PIMv1, SM/DM, IGMPv1/2, DVMRP
1995: Fast Switching, SAP/SDR, PIM, IGMP, Cisco IP Mroute, Mtrace, NBMA Mode
1996: AutoRP, CGMP, CMF
1997: MDFS, RFC 2337 ATM MPS
1998: PIMv2, BSR, MBGP, MSDP
1999: MMLS, Tunnel UDLR, Multicast NAT, Multicast Tag Switching
2000: SSM, PIM Bidir, MSDP MIB, Heart Beat, IGMP Snooping, IGMP Mproxy Route
2001: Cisco PIM Traps, MSDP SA limits
2002: MVPN/VRF Lite
2003: IPv6 Multicast, NetFlow v9 Multicast, PIM Snooping
2004: RPF Vector, Inter-AS MVPN, MVPN MIBs, SSM Filtering, IPv6 Multicast New Features
regards
shivlu jain