Monday, April 23, 2018

Deep Buffers In Data Center Switching

Buffering is a concept that has been around in networking for a long time. When we want to absorb extra traffic without dropping it, we buffer it. The holding time varies from a few microseconds to a few seconds, and the longer the burst has to be held, the deeper the buffer needs to be. A deep buffer means the excess traffic can sit in a queue for a few seconds and gets served once the burst clears. The only advantage of a deep buffer is that it can hold a peak burst for a fraction of a second, but along with that advantage comes a disadvantage: increased latency.

The longer we hold the data traffic, the higher the latency will be. If that is the case, how can OEMs claim that deep-buffer switches will help you build low latency data centers? As per my understanding, deep buffers help you build high latency data centers.

Deep buffers are only required when there is a mismatch between transmitting and receiving interface speeds and the extra burst has to be held for a while. But if we are building a low latency data center with a zero oversubscription ratio, there is no use in having deep-buffer switches. Latency-critical applications cannot perform well with deep buffers because they keep facing the delays those buffers introduce. So it is important to understand the traffic flow while deciding whether deep buffers will really help serve low latency applications or not.
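To put a rough number on it, the worst-case queuing delay is simply the buffer size divided by the drain rate of the port. Here is a minimal sketch in Python; the buffer sizes and port speed are assumed, illustrative values, not any vendor's datasheet figures:

# Rough queuing-delay estimate: a full buffer adds (buffer size / drain rate) of latency.
# Buffer sizes and port speed below are illustrative assumptions, not vendor figures.

def max_queuing_delay_ms(buffer_bytes, port_speed_gbps):
    """Worst-case time to drain a completely full buffer, in milliseconds."""
    drain_rate_bytes_per_sec = port_speed_gbps * 1e9 / 8
    return buffer_bytes / drain_rate_bytes_per_sec * 1000

# A shallow 12 MB shared buffer vs. a deep 4 GB buffer, both on a 10 Gbps port
for label, size_bytes in [("shallow 12 MB", 12e6), ("deep 4 GB", 4e9)]:
    print("%s buffer: up to %.1f ms of added latency" % (label, max_queuing_delay_ms(size_bytes, 10)))

On a 10 Gbps port the shallow buffer can add at most around 10 ms, while the deep buffer can add seconds of queuing delay, which is exactly the trade-off described above.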

Building Low Latency Data Center Switching Fabric

What is a Switching Fabric?
A data center fabric is a system of switches and servers and the interconnections between them that can be represented as a fabric. Because of the tightly woven connections between nodes (all devices in a fabric are referred to as nodes), data center fabrics are often perceived as complex, but actually it is the very tightness of the weave that makes the technology inherently elegant. A data center fabric allows for a flattened architecture in which any server node can connect to any other server node, and any switch node can connect to any server node. This flattened architecture of fabrics is key to their agility.

What are the trends in Switching Fabrics?
In earlier days, data center architecture was a 3-tier architecture running spanning tree or layer 3 routing across the switches. The biggest problem with these architectures was that only a single path was selected and the rest of the bandwidth across the network was wasted. All data traffic takes that one best path from the routing table until it gets congested, and then packets are dropped. This kind of fabric was not enough to handle existing data traffic growth with predictability, and a shift was required.

Clos networks simplified the existing complex topologies, giving us the SPINE and LEAF terminology used in modern data center switching. Data center networks are comprised of top-of-rack switches and core switches. The top-of-rack (ToR) switches are the leaf switches, and they are attached to the core switches, which represent the spine. The leaf switches are not connected to each other, and spine switches only connect to the leaf switches (or an upstream core device). In this spine-leaf architecture, the number of uplinks from each leaf switch equals the number of spine switches. Similarly, the number of downlinks from each spine equals the number of leaf switches. The total number of connections is the number of leaf switches multiplied by the number of spine switches: with 4 spines and 8 leaves you need 4 x 8 = 32 connections.
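That arithmetic is simple enough to sketch in Python; the port counts and speeds below are assumed values, purely for illustration:

# Spine-leaf fabric sizing: every leaf has one uplink to every spine.
# Port counts and speeds here are illustrative assumptions.

spines = 4
leaves = 8

fabric_links = spines * leaves        # 4 x 8 = 32 spine-leaf links
uplinks_per_leaf = spines             # one uplink per spine
downlinks_per_spine = leaves          # one downlink per leaf

print("Total fabric links: %d" % fabric_links)

# Oversubscription at a leaf = server-facing bandwidth / uplink bandwidth
server_ports_per_leaf, server_port_gbps, uplink_gbps = 48, 10, 40
ratio = (server_ports_per_leaf * server_port_gbps) / float(uplinks_per_leaf * uplink_gbps)
print("Leaf oversubscription ratio: %.1f:1" % ratio)   # 480 / 160 = 3.0:1

For a truly non-blocking (zero oversubscription) leaf, the uplink bandwidth has to match the server-facing bandwidth, which is the case the deep-buffer discussion above assumes.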

How Is Latency Improved By Changing Data Center Switching?
We are all aware that layer 2 switches are responsible for transporting data at the data link layer and perform error checking on each transmitted and received frame. The old generation of switches used in data centers performed store-and-forward switching. In store-and-forward switching, the entire frame has to be received first, and only after that is it forwarded. The switch stores the entire frame and runs the CRC calculation before it forwards. If no CRC errors are present, the switch forwards the frame; otherwise it drops it.

In the case of cut-through switching, when the switch receives a frame it looks at the first 6 bytes of the frame, checks the destination MAC address and the outgoing interface, and forwards the frame. All error calculations are left to the receiving device, in contrast to store-and-forward switching, where the transmitting switch performs them.
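The difference is easy to picture in a few lines of Python: cut-through only needs the first 6 bytes (the destination MAC) before it can pick an egress port, while store-and-forward buffers the whole frame and verifies the checksum first. This is only a sketch; the CRC-32 here is a stand-in for the real Ethernet FCS, and the MAC table is a plain dictionary:

import binascii

def cut_through_lookup(frame_bytes, mac_table):
    """Cut-through: decide the egress port as soon as the first 6 bytes (dst MAC) arrive."""
    dst_mac = frame_bytes[:6]
    return mac_table.get(dst_mac, "flood")               # unknown destination -> flood

def store_and_forward(frame_bytes, fcs, mac_table):
    """Store-and-forward: buffer the whole frame, verify the checksum, then forward."""
    if binascii.crc32(frame_bytes) & 0xffffffff != fcs:   # stand-in for the Ethernet FCS check
        return "drop"                                     # a corrupted frame never leaves the switch
    return mac_table.get(frame_bytes[:6], "flood")

In cut-through mode a corrupted frame is still forwarded and the error is only caught by the receiving host, which is the trade-off described above.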

Improving Latency By Converting the NIC to a SmartNIC
Traditionally, TCP/IP protocol processing has been performed in software by the end system's CPU. Under a high packet load the CPU gets busy with this processing, which unnecessarily increases host latency. This latency is incurred inside the host and is invisible, so nobody pays attention to it. With the help of a SmartNIC, also known as an Intelligent Server Adapter, we can offload protocol and network processing onto the NIC. SmartNICs are widely used in cloud data center servers to boost performance by offloading work from the CPU to the NIC. Traditional NICs only support checksum and segmentation offload, but here we want to offload the entire, complex server-based networking data plane, including SDN tunnel termination (the starting and ending points of tunnels). The SmartNIC has to be open and programmable; if it is not, it becomes a fixed function that is difficult for the SDN controller to control and program. Initially, packets and flows are handled by the host, but as soon as a flow is detected it is offloaded to the SmartNIC. Typically, a SmartNIC includes larger memory on-chip or on the SmartNIC board to hold a much larger number of flows.
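Conceptually, the host-to-SmartNIC handoff looks like the small sketch below. This is purely an illustrative model with assumed names and thresholds, not any vendor's offload API:

# Illustrative model of SmartNIC flow offload: the first packets of a flow are handled
# by the host CPU; once the flow is detected, its entry is pushed down to the NIC.
# Threshold, table names and return strings are assumptions for illustration only.

OFFLOAD_THRESHOLD = 3       # assumed: offload after the host has seen 3 packets of a flow

host_flow_table = {}        # flow 5-tuple -> packet count (software slow path)
nic_flow_table = set()      # flows whose forwarding now happens on the SmartNIC

def handle_packet(five_tuple):
    if five_tuple in nic_flow_table:
        return "forwarded in NIC hardware"        # host CPU never sees this packet
    count = host_flow_table.get(five_tuple, 0) + 1
    host_flow_table[five_tuple] = count
    if count >= OFFLOAD_THRESHOLD:
        nic_flow_table.add(five_tuple)            # push match/action entry to the NIC
    return "forwarded by host CPU"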

Summary
From the above comparison, we can conclude that when building low latency data centers, switching latency is one of the parameters to consider, but in today's world the latency difference between store-and-forward and cut-through switching is negligible.

The Clos design, on the other hand, cannot be ignored, because it gives predictability and helps utilize all the available paths compared to a three-tier architecture. Apart from this, overall network latency also has to be considered.

Intelligent Ethernet NICs offload protocol processing from the application CPU thereby eliminating software performance bottlenecks, minimizing CPU utilization, and greatly reducing the host component of end-to-end latency.


Saturday, April 21, 2018

Using Salt with Network Devices - Part 2


In Part-1 we learned about Salt basics and its installation. In this part we will focus on how Salt works and also talk about the proxy-minion for Juniper devices.
To start, let's begin by defining the master configuration on the master01 host.
Please use the editor of your choice (like vim or nano) to edit the file /etc/salt/master and add the following two entries
       
root@master01:~# cat /etc/salt/master
interface: 0.0.0.0
auto_accept: True

The interface with all zeros means that the master will listen for minions on all available and active interfaces. Obviously, it is also possible to restrict master-to-minion communication to a specific interface by defining the IP address of that specific interface.
As explained in Part-1, master and minion communication is secured and they exchange keys. The entry "auto_accept: True" makes the master accept keys from minion(s) as and when they start, since this is a controlled demo environment. In practice we keep it as "False" so that we accept each minion's key manually and no unauthorized minion can connect to the master. On the minion we also have two entries, in the /etc/salt/minion file, which are shown below
       
root@minion01:~# cat /etc/salt/minion
master: 192.168.122.2
id: minion01

The master entry defines the IP address of the master, and id is the unique identifier of this minion.
Now start the master in debug mode. Notice the authentication request from minion01.
       
root@master01:~# salt-master -l debug
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Configuration file path: /etc/salt/master
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO    ] Setting up the Salt Master
[INFO    ] Generating master keys: /etc/salt/pki/master
[INFO    ] Preparing the root key for local communication
[PROFILE ] Beginning pwd.getpwall() call in masterapi access_keys function
[PROFILE ] End pwd.getpwall() call in masterapi access_keys function
[DEBUG   ] Created pidfile: /var/run/salt-master.pid
[INFO    ] Starting up the Salt Master
[DEBUG   ] LazyLoaded roots.envs
[DEBUG   ] Could not LazyLoad roots.init: 'roots.init' is not available.
[INFO    ] salt-master is starting as user 'root'
[INFO    ] Current values for max open files soft/hard setting: 1024/1048576
[INFO    ] Raising max open files value to 100000
[INFO    ] New values for max open files soft/hard values: 100000/1048576
[INFO    ] Creating master process manager
[INFO    ] Creating master publisher process
[DEBUG   ] Started 'salt.transport.zeromq.._publish_daemon' with pid 18527
[INFO    ] Creating master event publisher process
[INFO    ] Starting the Salt Publisher on tcp://0.0.0.0:4505
[INFO    ] Starting the Salt Puller on ipc:///var/run/salt/master/publish_pull.ipc
[DEBUG   ] Started 'salt.utils.event.EventPublisher' with pid 18530
[INFO    ] Creating master maintenance process
[DEBUG   ] Started 'salt.master.Maintenance' with pid 18531
[INFO    ] Creating master request server process
[DEBUG   ] Started 'ReqServer' with pid 18532
[ERROR   ] Unable to load SSDP: asynchronous IO is not available.
[ERROR   ] You are using Python 2, please install "trollius" module to enable SSDP discovery.
[DEBUG   ] Process Manager starting!
[DEBUG   ] Started 'salt.transport.zeromq..zmq_device' with pid 18533
[DEBUG   ] Initializing new Schedule
[INFO    ] Setting up the master communication server

[INFO    ] Authentication request from minion01
[INFO    ] Authentication accepted from minion01
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = salt/auth; data = {u'id': 'minion01', '_stamp': '2018-04-21T09:20:42.794175', u'result': True, u'pub': '-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAupxG1B1QBwxNXX4bhiyK\nN/WL5KRoMQFnwuNYGms1C1PcMthzQ/eCPZW91RQYwTuvPhfUr79lpRXz4DltGSei\nR4RBeGE/pk2g8obx9tQlBhChm3dzZk68S0DvCwnhH76ZKfR5XGuTFCwIH2Uh72/p\nmEET7cYuM8bKNx+nWWzeKhs/rYwuxcJAjwuQZZeccgsWXvS69VP30LVZHCqOM5ZA\n8SleJd8yRyZ6PvLOfQtthJasc7FmWoTqkyGNaPaZSWefe9/FNXreiAk+YXoXIZOC\nNRZQMURHG8L1jot7mUlhSxhjXaCOFCbOwaOhcwHtmUcMfbnQ9Sz0/xh1cFxxRMaH\nSQIDAQAB\n-----END PUBLIC KEY-----', u'act': u'accept'}
[DEBUG   ] Determining pillar cache
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded localfs.init_kwargs
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = minion/refresh/minion01; data = {u'Minion data cache refresh': 'minion01', '_stamp': '2018-04-21T09:20:43.006560'}
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = minion_start; data = {'_stamp': '2018-04-21T09:20:43.478571', 'pretag': None, 'cmd': '_minion_event', 'tag': 'minion_start', 'data': 'Minion minion01 started at Sat Apr 21 14:50:43 2018', 'id': 'minion01'}
[DEBUG   ] Sending event: tag = salt/minion/minion01/start; data = {'_stamp': '2018-04-21T09:20:43.510991', 'pretag': None, 'cmd': '_minion_event', 'tag': 'salt/minion/minion01/start', 'data': 'Minion minion01 started at Sat Apr 21 14:50:43 2018', 'id': 'minion01'}
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] Found minion id from generate_minion_id(): master01
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Please install 'virt-what' to improve results of the 'virtual' grain.
[DEBUG   ] LazyLoaded local_cache.clean_old_jobs
[DEBUG   ] LazyLoaded localfs.list_tokens
[DEBUG   ] Updating roots fileserver cache
[DEBUG   ] This salt-master instance has accepted 1 minion keys.

Similarly, the minion can be started in debug mode with the following command
       
root@minion01:~# salt-minion -l debug

Now let's run some execution commands from the master to the minion. Please note that while executing a command we need to specify the minion name. We can also use glob patterns: '*' means all minions and 'min*' means all minions whose names start with 'min'. Notice the use of single quotes (they are mandatory).
Let's execute something from the salt-master
       
root@master01:~# salt '*' test.ping
minion01:
    True
root@master01:~#

Now let's check the grains (the static information about the minion – as explained in Part-1)
       
root@master01:~# salt 'minion01' grains.items
minion01:
    ----------
    SSDs:
    biosreleasedate:
        01/01/2011
    biosversion:
        0.5.1
    cpu_flags:
        - fpu
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - syscall
        - nx
        - lm
        - rep_good
        - nopl
        - pni
        - cx16
        - hypervisor
        - lahf_lm
        - kaiser
    cpu_model:
        QEMU Virtual CPU version 1.5.3
    cpuarch:
        x86_64
    disks:
        - sda
        - sr0
        - loop0
        - loop1
        - loop2
        - loop3
        - loop4
        - loop5
        - loop6
        - loop7
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 192.168.122.1
            - 10.233.6.81
        ip6_nameservers:
        nameservers:
            - 192.168.122.1
            - 10.233.6.81
        options:
        search:
        sortlist:
    domain:
    fc_wwn:
    fqdn:
        minion01
    fqdn_ip4:
    fqdn_ip6:
    fqdns:
    gid:
        0
    gpus:
        |_
          ----------
          model:
              GD 5446
          vendor:
              unknown
    groupname:
        root
    host:
        minion01
    hwaddr_interfaces:
        ----------
        ens3:
            52:54:00:00:08:01
        ens4:
            52:54:00:00:08:03
        lo:
            00:00:00:00:00:00
    id:
        minion01
    init:
        systemd
    ip4_gw:
        40.1.1.2
    ip4_interfaces:
        ----------
        ens3:
            - 192.168.122.3
        ens4:
            - 40.1.1.17
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens3:
            - fe80::5054:ff:fe00:801
        ens4:
            - fe80::5054:ff:fe00:803
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens3:
            - 192.168.122.3
            - fe80::5054:ff:fe00:801
        ens4:
            - 40.1.1.17
            - fe80::5054:ff:fe00:803
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 40.1.1.17
        - 127.0.0.1
        - 192.168.122.3
    ipv6:
        - ::1
        - fe80::5054:ff:fe00:801
        - fe80::5054:ff:fe00:803
    iscsi_iqn:
        - iqn.1993-08.org.debian:01:2bee19278ac0
    kernel:
        Linux
    kernelrelease:
        4.4.0-112-generic
    kernelversion:
        #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            en_US
        detectedencoding:
            UTF-8
    localhost:
        minion01
    lsb_distrib_codename:
        xenial
    lsb_distrib_description:
        Ubuntu 16.04.3 LTS
    lsb_distrib_id:
        Ubuntu
    lsb_distrib_release:
        16.04
    machine_id:
        fb07e936a29d43748b5f9090ec7e9cd3
    manufacturer:
        Red Hat
    master:
        192.168.122.2
    mdadm:
    mem_total:
        2000
    nodename:
        minion01
    num_cpus:
        2
    num_gpus:
        1
    os:
        Ubuntu
    os_family:
        Debian
    osarch:
        amd64
    oscodename:
        xenial
    osfinger:
        Ubuntu-16.04
    osfullname:
        Ubuntu
    osmajorrelease:
        16
    osrelease:
        16.04
    osrelease_info:
        - 16
        - 4
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
    pid:
        17372
    productname:
        KVM
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python
    pythonpath:
        - /usr/local/bin
        - /usr/lib/python2.7
        - /usr/lib/python2.7/plat-x86_64-linux-gnu
        - /usr/lib/python2.7/lib-tk
        - /usr/lib/python2.7/lib-old
        - /usr/lib/python2.7/lib-dynload
        - /usr/local/lib/python2.7/dist-packages
        - /usr/lib/python2.7/dist-packages
    pythonversion:
        - 2
        - 7
        - 12
        - final
        - 0
    saltpath:
        /usr/local/lib/python2.7/dist-packages/salt
    saltversion:
        2017.7.0-693-ga5f96e6
    saltversioninfo:
        - 2017
        - 7
        - 0
        - 0
    serialnumber:

    server_id:
        1310197239
    shell:
        /bin/bash
    swap_total:
        0
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN
        version:
            229
    uid:
        0
    username:
        root
    uuid:
        fb07e936-a29d-4374-8b5f-9090ec7e9cd3
    virtual:
        kvm
    zfs_support:
        False
    zmqversion:
        4.1.6
root@master01:~#

As you can see, we have collected a huge list of things. Let's run some commands remotely
       
root@master01:~# salt 'minion01' cmd.run 'lsb_release -a'
minion01:
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 16.04.3 LTS
    Release:	16.04
    Codename:	xenial
root@master01:~#

Salt also maintains a file server to distribute files from the master to the minions. For security reasons, the minions can't have access to all the files on the master; instead, we define one specific folder in the master configuration which the minions can access. We can copy files from master to minion, or vice versa, only within this folder.
The salt master config file now looks like this
       
root@master01:~# cat /etc/salt/master
interface: 0.0.0.0
auto_accept: True
file_roots:
 base:
   - /opt/test_folder
root@master01:~#

We have now defined file_roots in the master config file, which means we can now transfer the contents of the folder /opt/test_folder/ from master to minion or vice versa. Let's see how it is done
       
root@master01:~# salt 'minion01' cp.get_file 'salt://salt-testfile.txt' '/opt/test_folder/'
minion01:
    /opt/test_folder/salt-testfile.txt
root@master01:~#

Let's check on the minion
       
root@minion01:~# ll /opt/test_folder/
total 12
drwxr-xr-x 2 root root 4096 Apr 21 15:47 ./
drwxr-xr-x 3 root root 4096 Apr 21 15:37 ../
-rw-r--r-- 1 root root   47 Apr 21 15:47 salt-testfile.txt
root@minion01:~#

Working with Junos proxy:

Proxy-minion is a very important feature that enables controlling devices that can't run a standard salt-minion. As mentioned in Part-1, the same minion will be acting as the proxy-minion for Junos devices, sitting between the master and the network devices.
Junos proxy provides the necessary plumbing that allows device discovery, control, status, remote execution etc. on Juniper routers and switches.
Please note that every Junos device needs its own proxy process. We can have multiple proxy-minion processes running on the same minion device, as in our example here.
Before we begin, since we now need to talk to Juniper devices, we need to install three more Python libraries on the master and the minions. These libraries are:
1) junos-eznc: The Juniper PyEz library.
2) jxmlease: a Python module for converting XML to intelligent Python data structures, and converting Python data structures to XML.
3) yamlordereddictloader: module providing a loader and a dumper for PyYAML allowing to keep items order when loading a file.
       
root@master01:~# pip list | grep eznc
junos-eznc (2.1.7)
root@master01:~#

root@master01:~# pip list | grep ease
jxmlease (1.0.1)
root@master01:~#

root@master01:~# pip list | grep yaml
yamlordereddictloader (0.4.0)
root@master01:~#
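As a quick illustration of what jxmlease does, it converts RPC XML into ordinary Python data structures. The XML snippet below is made up purely for illustration:

# Quick jxmlease illustration: XML in, Python data structures out.
# The XML snippet here is invented for illustration only.
import jxmlease

reply = ("<software-information>"
         "<host-name>spine01</host-name>"
         "<junos-version>17.4R1.16</junos-version>"
         "</software-information>")

data = jxmlease.parse(reply)
print(data['software-information']['junos-version'])    # 17.4R1.16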

We will use the Juniper virtual QFX (vQFX) as our network devices. This example will work in exactly the same way across all Junos based devices, from the smallest EX2200-C to the biggest PTX10K.
Following is the topology for the virtual network. This is an example of a small data center with a typical spine and leaf architecture.
Please ensure that NETCONF (over SSH) is enabled on the Juniper devices. Below is an example from one spine and one leaf, along with the Junos version
       
lab@spine01> show configuration system services
ssh {
    root-login allow;
}
netconf {
    ssh;
}

{master:0}
lab@spine01> show version brief
fpc0:
--------------------------------------------------------------------------
Hostname: spine01
Model: vqfx-10000
Junos: 17.4R1.16 limited
JUNOS Base OS boot [17.4R1.16]
JUNOS Base OS Software Suite [17.4R1.16]
JUNOS Crypto Software Suite [17.4R1.16]
JUNOS Online Documentation [17.4R1.16]
JUNOS Kernel Software Suite [17.4R1.16]
JUNOS Packet Forwarding Engine Support (qfx-10-f) [17.4R1.16]
JUNOS Routing Software Suite [17.4R1.16]
JUNOS jsd [i386-17.4R1.16-jet-1]
JUNOS SDN Software Suite [17.4R1.16]
JUNOS Enterprise Software Suite [17.4R1.16]
JUNOS Web Management [17.4R1.16]
JUNOS py-base-i386 [17.4R1.16]
JUNOS py-extensions-i386 [17.4R1.16]

**
lab@leaf02> show configuration system services
ssh {
    root-login allow;
}
netconf {
    ssh;
}

{master:0}
lab@leaf02> show version brief
fpc0:
--------------------------------------------------------------------------
Hostname: leaf02
Model: vqfx-10000
Junos: 17.4R1.16 limited
JUNOS Base OS boot [17.4R1.16]
JUNOS Base OS Software Suite [17.4R1.16]
JUNOS Crypto Software Suite [17.4R1.16]
JUNOS Online Documentation [17.4R1.16]
JUNOS Kernel Software Suite [17.4R1.16]
JUNOS Packet Forwarding Engine Support (qfx-10-f) [17.4R1.16]
JUNOS Routing Software Suite [17.4R1.16]
JUNOS jsd [i386-17.4R1.16-jet-1]
JUNOS SDN Software Suite [17.4R1.16]
JUNOS Enterprise Software Suite [17.4R1.16]
JUNOS Web Management [17.4R1.16]
JUNOS py-base-i386 [17.4R1.16]
JUNOS py-extensions-i386 [17.4R1.16]

{master:0}
lab@leaf02>

For the master to run commands on Junos devices, we need to define the following files in the /srv/pillar/ folder
1) Pillar file for each Junos device
2) Top file for all the pillar files

Pillars are user defined variables which are distributed among the minions. Pillars are useful for

1) Highly sensitive data
2) Minion Configuration
3) Variables
4) Arbitrary data

Note: The default location for the pillar files is /srv/pillar; however, it can be changed in the master configuration file with the 'pillar_roots' parameter
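For reference, if you ever want to set it explicitly, the master config simply carries a pillar_roots block alongside file_roots. Shown below with the default /srv/pillar path; we have not actually changed it in this lab:

root@master01:~# cat /etc/salt/master
interface: 0.0.0.0
auto_accept: True
file_roots:
  base:
    - /opt/test_folder
pillar_roots:
  base:
    - /srv/pillar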

The top file is used to map which SLS modules get loaded onto which minions via the state system. We will understand this in more detail in the example.
The master config will not be changed; however, since the minion will now be acting as a proxy-minion, we need to define a proxy configuration file on the minion system. This file is called proxy and is placed under the /etc/salt/ folder
       
root@minion01:~# ll /etc/salt/
total 24
drwxr-xr-x  4 root root 4096 Apr 21 18:28 ./
drwxr-xr-x 94 root root 4096 Apr 21 18:21 ../
-rw-r--r--  1 root root   35 Apr 21 14:00 minion
drwxr-xr-x  2 root root 4096 Apr 21 14:49 minion.d/
drwxr-xr-x  3 root root 4096 Apr 21 14:49 pki/
-rw-r--r--  1 root root   22 Apr 21 18:28 proxy
root@minion01:~#
root@minion01:~# cat /etc/salt/proxy
master: 192.168.122.2

For now that is all required to be done on the minion system.
On the master system, let’s see the various files present in the /srv/pillar folder
       
root@master01:~# ll /srv/pillar/
total 32
drwxr-xr-x 2 root root 4096 Apr 21 18:24 ./
drwxr-xr-x 3 root root 4096 Apr 21 18:17 ../
-rw-r--r-- 1 root root   76 Apr 21 18:24 leaf01.sls
-rw-r--r-- 1 root root   76 Apr 21 18:24 leaf02.sls
-rw-r--r-- 1 root root   76 Apr 21 18:24 leaf03.sls
-rw-r--r-- 1 root root   77 Apr 21 18:20 spine01.sls
-rw-r--r-- 1 root root   77 Apr 21 18:23 spine02.sls
-rw-r--r-- 1 root root  140 Apr 21 18:19 top.sls
root@master01:~#

The content of one of the pillar files. Please note that in the host field, we can also provide the IP address of the Junos device
       
root@master01:~# cat /srv/pillar/leaf01.sls
proxy:
  proxytype: junos
  host: leaf01
  username: lab
  password: q1w2e3

The contents of top file
       
root@master01:~# cat /srv/pillar/top.sls
base:
  'spine01':
     - spine01
  'spine02':
     - spine02
  'leaf01':
     - leaf01
  'leaf02':
     - leaf02
  'leaf03':
     - leaf03
root@master01:~#

The above top file can be read as 'the category base has minion spine01, whose data is stored in the spine01 file'. Please note that the .sls extension does not need to be specified here.
Once again, it is interesting to note that all configuration is done on the master system. Let's start the master, minion and proxy-minion processes. The '-d' flag means the process will start in daemon mode.
       
root@master01:~# salt-master -d

On the minion we do the following. Note that I need to start a proxy-minion process for each device I want to manage. It is also important to note that each proxy-minion process consumes about 50 MB of the system's RAM, so please ensure you have enough memory available on the minion
       
root@minion01:~# salt-proxy --proxyid=spine01 -d
root@minion01:~# salt-proxy --proxyid=spine02 -d
root@minion01:~# salt-proxy --proxyid=leaf01 -d
root@minion01:~# salt-proxy --proxyid=leaf02 -d
root@minion01:~# salt-proxy --proxyid=leaf03 -d

root@minion01:~# ps aux | grep salt
root     18053  5.5  4.3 1562028 89256 ?       Sl   18:40   0:03 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=spine01 -d
root     18147  4.7  4.0 1562288 83024 ?       Sl   18:40   0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=spine02 -d
root     18399  6.2  4.0 1562024 82924 ?       Sl   18:40   0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=leaf01 -d
root     18479  7.0  4.0 1562024 82692 ?       Sl   18:40   0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=leaf02 -d
root     18572  8.1  4.0 1562028 82812 ?       Sl   18:40   0:02 /usr/bin/python /usr/local/bin/salt-proxy --proxyid=leaf03 -d
root     18921  5.0  2.5 832988 52716 ?        Sl   18:41   0:01 /usr/bin/python /usr/local/bin/salt-minion -d
root     18922  0.0  1.6 291388 34704 ?        S    18:41   0:00 /usr/bin/python /usr/local/bin/salt-minion -d
root     18995  0.0  0.0  12944   972 pts/0    S+   18:41   0:00 grep --color=auto salt
root@minion01:~#

As mentioned earlier, communication between the master and the minions is secured and they exchange keys. We can check which keys the master has accepted.
       
root@master01:~# salt-key -L
Accepted Keys:
leaf01
leaf02
leaf03
minion01
spine01
spine02
Denied Keys:
Unaccepted Keys:
Rejected Keys:

Now that we have started the master and the proxy-minions, we can check the pillars which were loaded and the grains for the Junos devices
       
root@master01:~# salt '*' pillar.items
spine02:
    ----------
    proxy:
        ----------
        host:
            spine02
        password:
            q1w2e3
        proxytype:
            junos
        username:
            lab
leaf03:
    ----------
    proxy:
        ----------
        host:
            leaf03
        password:
            q1w2e3
        proxytype:
            junos
        username:
            lab
spine01:
    ----------
    proxy:
        ----------
        host:
            spine01
        password:
            q1w2e3
        proxytype:
            junos
        username:
            lab
leaf01:
    ----------
    proxy:
        ----------
        host:
            leaf01
        password:
            q1w2e3
        proxytype:
            junos
        username:
            lab
leaf02:
    ----------
    proxy:
        ----------
        host:
            leaf02
        password:
            q1w2e3
        proxytype:
            junos
        username:
            lab
root@master01:~#

root@master01:~# salt '*' test.ping
leaf03:
    True
leaf01:
    True
spine01:
    True
spine02:
    True
leaf02:
    True
root@master01:~#

Now let’s run some Junos specific commands
       
root@master01:~# salt 'spine01' 'junos.facts'
spine01:
    ----------
    facts:
        ----------
        2RE:
            False
        HOME:
            /var/home/lab
        RE0:
            ----------
            last_reboot_reason:
                Router rebooted after a normal shutdown.
            mastership_state:
                master
            model:
                QFX Routing Engine
            status:
                Absent
            up_time:
                22 hours, 25 minutes, 50 seconds
        RE1:
            None
        RE_hw_mi:
            False
        current_re:
            - master
            - node
            - fwdd
            - member
            - pfem
            - re0
            - fpc0
            - localre
        domain:
            None
        fqdn:
            spine01
        hostname:
            spine01
        hostname_info:
            ----------
            fpc0:
                spine01
        ifd_style:
            CLASSIC
        junos_info:
            ----------
            fpc0:
                ----------
                object:
                    ----------
                    build:
                        16
                    major:
                        - 17
                        - 4
                    minor:
                        1
                    type:
                        R
                text:
                    17.4R1.16
        master:
            RE0
        model:
            VQFX-10000
        model_info:
            ----------
            fpc0:
                VQFX-10000
        personality:
            None
        re_info:
            ----------
            default:
                ----------
                0:
                    ----------
                    last_reboot_reason:
                        Router rebooted after a normal shutdown.
                    mastership_state:
                        master
                    model:
                        QFX Routing Engine
                    status:
                        Absent
                default:
                    ----------
                    last_reboot_reason:
                        Router rebooted after a normal shutdown.
                    mastership_state:
                        master
                    model:
                        QFX Routing Engine
                    status:
                        Absent
        re_master:
            ----------
            default:
                0
        serialnumber:
            62861517157
        srx_cluster:
            None
        srx_cluster_id:
            None
        srx_cluster_redundancy_group:
            None
        switch_style:
            VLAN_L2NG
        vc_capable:
            True
        vc_fabric:
            False
        vc_master:
            0
        vc_mode:
            Enabled
        version:
            17.4R1.16
        version_RE0:
            None
        version_RE1:
            None
        version_info:
            ----------
            build:
                16
            major:
                - 17
                - 4
            minor:
                1
            type:
                R
        virtual:
            None
    out:
        True
root@master01:~#

As you can see we have a huge list of things collected.
       
root@master01:~# salt 'leaf02*' 'junos.cli' 'show version brief'
leaf02:
    ----------
    message:

        fpc0:
        --------------------------------------------------------------------------
        Hostname: leaf02
        Model: vqfx-10000
        Junos: 17.4R1.16 limited
        JUNOS Base OS boot [17.4R1.16]
        JUNOS Base OS Software Suite [17.4R1.16]
        JUNOS Crypto Software Suite [17.4R1.16]
        JUNOS Online Documentation [17.4R1.16]
        JUNOS Kernel Software Suite [17.4R1.16]
        JUNOS Packet Forwarding Engine Support (qfx-10-f) [17.4R1.16]
        JUNOS Routing Software Suite [17.4R1.16]
        JUNOS jsd [i386-17.4R1.16-jet-1]
        JUNOS SDN Software Suite [17.4R1.16]
        JUNOS Enterprise Software Suite [17.4R1.16]
        JUNOS Web Management [17.4R1.16]
        JUNOS py-base-i386 [17.4R1.16]
        JUNOS py-extensions-i386 [17.4R1.16]
    out:
        True
root@master01:~#

root@master01:~# salt 'spine*' 'junos.cli' 'show interface terse xe*'
spine01:
    ----------
    message:

        Interface               Admin Link Proto    Local                 Remote
        xe-0/0/0                up    up
        xe-0/0/0.0              up    up   inet     1.0.0.2/30
        xe-0/0/1                up    up
        xe-0/0/1.0              up    up   inet     2.0.0.2/30
        xe-0/0/2                up    up
        xe-0/0/2.0              up    up   inet     3.0.0.2/30
        xe-0/0/3                up    up
        xe-0/0/3.0              up    up   eth-switch
        xe-0/0/4                up    up
        xe-0/0/4.16386          up    up
        xe-0/0/5                up    up
        xe-0/0/5.16386          up    up
        xe-0/0/6                up    up
        xe-0/0/6.16386          up    up
        xe-0/0/7                up    up
        xe-0/0/7.16386          up    up
        xe-0/0/8                up    up
        xe-0/0/8.16386          up    up
        xe-0/0/9                up    up
        xe-0/0/9.16386          up    up
        xe-0/0/10               up    up
        xe-0/0/10.16386         up    up
        xe-0/0/11               up    up
        xe-0/0/11.16386         up    up
    out:
        True
spine02:
    ----------
    message:

        Interface               Admin Link Proto    Local                 Remote
        xe-0/0/0                up    up
        xe-0/0/0.0              up    up   inet     1.0.0.6/30
        xe-0/0/1                up    up
        xe-0/0/1.0              up    up   inet     2.0.0.6/30
        xe-0/0/2                up    up
        xe-0/0/2.0              up    up   inet     3.0.0.6/30
        xe-0/0/3                up    up
        xe-0/0/3.0              up    up   eth-switch
        xe-0/0/4                up    up
        xe-0/0/4.16386          up    up
        xe-0/0/5                up    up
        xe-0/0/5.16386          up    up
        xe-0/0/6                up    up
        xe-0/0/6.16386          up    up
        xe-0/0/7                up    up
        xe-0/0/7.16386          up    up
        xe-0/0/8                up    up
        xe-0/0/8.16386          up    up
        xe-0/0/9                up    up
        xe-0/0/9.16386          up    up
        xe-0/0/10               up    up
        xe-0/0/10.16386         up    up
        xe-0/0/11               up    up
        xe-0/0/11.16386         up    up
    out:
        True

root@master01:~#

This is what happens on the vQFX. Please note that it is actually doing an rpc call to the switch.
       
Apr 21 19:51:44  spine01 mgd[5019]: UI_CMDLINE_READ_LINE: User 'lab', command 'load-configuration rpc rpc commit-configuration check commit-configuration rpc rpc commit-configuration rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc file-list path /dev/null path file-list rpc rpc command show interface terse xe* '

Apr 21 19:51:44  spine01 mgd[5019]: UI_NETCONF_CMD: User 'lab' used NETCONF client to run command 'get-interface-information level-extra=terse interface-name=xe*'
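Under the hood, the junos proxy uses junos-eznc (PyEZ) to keep a NETCONF session open to the device and issue these RPCs. A rough, standalone equivalent of what the proxy is doing for us is shown below; the host and credentials come from the pillar example above, and this is only a sketch, not the proxy module's actual code:

# Rough PyEZ equivalent of what the salt junos proxy does for junos.facts / junos.cli.
# Host and credentials match the pillar example; this is a sketch, not the proxy's code.
from jnpr.junos import Device

dev = Device(host='spine01', user='lab', password='q1w2e3')
dev.open()                                   # NETCONF over SSH
print(dev.facts['version'])                  # 17.4R1.16, same data as junos.facts
print(dev.cli('show interface terse xe*', warning=False))
dev.close()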

Now let’s change some configuration on the switch. Let’s change the hostname of the 'spine01' switch to 'spine0001'
       
root@master01:~# salt 'spine01' 'junos.set_hostname' 'hostname=spine0001' 'commit_change=True'
spine01:
    ----------
    message:
        Successfully changed hostname.
    out:
        True
root@master01:~#
**
{master:0}[edit]
lab@spine01# *** messages ***
Apr 21 19:56:26 spine01 mgd[5019]: UI_COMMIT: User 'lab' requested 'commit' operation (comment: none)
Apr 21 19:56:26 spine01 mgd[5019]: UI_COMMIT_NO_MASTER_PASSWORD: No 'system master-password' set
Apr 21 19:56:27 spine01 mgd[5019]: UI_CHILD_EXITED: Child exited: PID 9609, status 7, command '/usr/sbin/mustd'
Apr 21 19:56:27 spine01 rpd[9633]: mpls_label_alloc_mode_new TRUE
Apr 21 19:56:27 spine01 l2cpd[9635]: ppmlite_var_init: iri instance = 36735
Apr 21 19:56:28 spine01 mgd[5019]: UI_COMMIT: User 'lab' requested 'commit' operation (comment: none)
Apr 21 19:56:28 spine01 mgd[5019]: UI_COMMIT_NO_MASTER_PASSWORD: No 'system master-password' set
Apr 21 19:56:28 spine01 mgd[5019]: UI_CHILD_EXITED: Child exited: PID 9642, status 7, command '/usr/sbin/mustd'
Apr 21 19:56:29 spine01 rpd[9666]: mpls_label_alloc_mode_new TRUE
Apr 21 19:56:29 spine01 l2cpd[9668]: ppmlite_var_init: iri instance = 36735
Apr 21 19:56:30 spine0001 mgd[5019]: UI_COMMIT_COMPLETED: commit complete
{master:0}[edit]
lab@spine0001#
And the hostname is changed.
In the next part we will play with some of the event-driven capabilities of the Salt system with Juniper devices.

***End of Part 2***


Friday, April 20, 2018

Using Salt with Network Devices - Part 1

Introduction:
Salt is an orchestration system developed by a company called SaltStack (https://saltstack.com/). The Salt software is for complex systems management at scale. Salt is based on Python. Salt comes in two flavors:
1) Salt Open Project
2) Salt Enterprise

Salt is easy enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with them in seconds. Similarly, Salt can be used to manage network devices effectively. Salt has remote execution capabilities which allows us to run commands on various machines in parallel with flexible targeting system. In this post we will touch base on the basics of Salt and its installation.

Working Model:
Salt follows a client server model:
1) Master – is the server
2) Minion – is the client

It is possible to have multiple minions connect to a single master. The communication between master and minion is secured, and they exchange dynamically generated keys before anything else. The entire operational model is built on a dynamic communication bus, which is ZeroMQ; it is sometimes also referred to as a pub-sub model. The Salt system follows a very strict directory structure. By default, the files are expected to be in the "/etc/salt" folder and the "/srv" folder; however, the default directory structure can be changed. We will see the use of these folders in subsequent posts.
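The pub-sub idea itself is easy to see with pyzmq, which Salt pulls in as a dependency. The toy publisher/subscriber below is only meant to illustrate the pattern; it is not Salt's actual wire protocol, and the port and messages are made up:

# Toy ZeroMQ pub-sub, only to illustrate the pattern Salt's event bus is built on.
# This is NOT Salt's wire protocol; the port and messages are made up.
import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)              # "master" side: publishes jobs
pub.bind("tcp://127.0.0.1:5556")

sub = ctx.socket(zmq.SUB)              # "minion" side: subscribes to jobs
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, u"job")

time.sleep(0.2)                        # give the subscription time to propagate
pub.send_string(u"job test.ping")
print(sub.recv_string())               # -> job test.ping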

Other than these, there are a few more components like:
1) Grains – the static information about the minion like OS name, Memory, Model No etc
2) Execution Modules – Ad hoc commands which can be executed from master to one or more target minions like ‘disk usage’, ‘ping’ etc
3) Pillar – stores data related to Minion like host information, IP address, user-credentials etc
There are a few more components which we will talk about in future posts. Since network devices have proprietary operating systems, it is not possible to make them minions. To resolve this issue, there is the concept of a proxy-minion.

In this case the master will talk to the network devices via minions. 

Installation:
Now let's do the installation of both master and minion. For simplicity we will use one master and one minion. The same minion will later be used as the proxy-minion. Before beginning the installation, it is assumed that the user is familiar with Linux (Ubuntu / CentOS etc.) and a few other things like git and python-pip. We will be using Ubuntu 16.04 (xenial) for this installation. For other Linux platforms, the installation will be very similar. Even though it is not mandatory, it's always better to have the master and minion synced to the same NTP server.
Here is the master

        
root@master01:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
root@master01:~# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 172.28.16.17    .POOL.          16 p    -   64    0    0.000    0.000   0.000
*172.28.16.17   .GPS.            1 u  891 1024  377  305.662   -0.067   5.264
root@master01:~#

And here is the minion
        
root@minion01:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
root@minion01:~# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 172.28.16.17    .POOL.          16 p    -   64    0    0.000    0.000   0.000
*172.28.16.17   .GPS.            1 u   56   64   37  301.813   -0.886   1.007
root@minion01:~#

Even though Salt has its own repo on GitHub, we will use a forked version of the repo. The forked version is based on the Nitrogen release of Salt and is available here (https://github.com/vnitinv/salt). This repo is managed by Juniper.
The installation of Salt on both master and minion is identical, hence for simplicity I am only showing it on the master.
       
 
root@master01:~# pip install git+https://github.com/vnitinv/salt

Collecting git+https://github.com/vnitinv/salt
  Cloning https://github.com/vnitinv/salt to /tmp/pip-BOGbIe-build
Collecting Jinja2 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
    100% |████████████████████████████████| 133kB 390kB/s
Collecting msgpack-python>0.3 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/8a/20/6eca772d1a5830336f84aca1d8198e5a3f4715cd1c7fc36d3cc7f7185091/msgpack-python-0.5.6.tar.gz (138kB)
    100% |████████████████████████████████| 143kB 423kB/s
Collecting PyYAML (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/4a/85/db5a2df477072b2902b0eb892feb37d88ac635d36245a72a6a69b23b383a/PyYAML-3.12.tar.gz (253kB)
    100% |████████████████████████████████| 256kB 399kB/s
Collecting MarkupSafe (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Collecting requests>=1.0.0 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/49/df/50aa1999ab9bde74656c2919d9c0c085fd2b3775fd3eca826012bef76d8c/requests-2.18.4-py2.py3-none-any.whl (88kB)
    100% |████████████████████████████████| 92kB 34kB/s
Collecting tornado==4.5.3 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/e3/7b/e29ab3d51c8df66922fea216e2bddfcb6430fb29620e5165b16a216e0d3c/tornado-4.5.3.tar.gz (484kB)
    100% |████████████████████████████████| 491kB 307kB/s
Collecting futures>=2.0 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/2d/99/b2c4e9d5a30f6471e410a146232b4118e697fa3ffc06d6a65efde84debd0/futures-3.2.0-py2-none-any.whl
Collecting pycrypto>=2.6.1 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/60/db/645aa9af249f059cc3a368b118de33889219e0362141e75d4eaf6f80f163/pycrypto-2.6.1.tar.gz (446kB)
    100% |████████████████████████████████| 450kB 366kB/s
Collecting pyzmq>=2.2.0 (from salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/5d/b0/3aea046f5519e2e059a225e8c924f897846b608793f890be987d07858b7c/pyzmq-17.0.0-cp27-cp27mu-manylinux1_x86_64.whl (3.0MB)
    100% |████████████████████████████████| 3.0MB 149kB/s
Collecting certifi>=2017.4.17 (from requests>=1.0.0->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl (150kB)
    100% |████████████████████████████████| 153kB 423kB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests>=1.0.0->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
    100% |████████████████████████████████| 143kB 442kB/s
Collecting idna<2.7,>=2.5 (from requests>=1.0.0->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/27/cc/6dd9a3869f15c2edfab863b992838277279ce92663d334df9ecf5106f5c6/idna-2.6-py2.py3-none-any.whl (56kB)
    100% |████████████████████████████████| 61kB 626kB/s
Collecting urllib3<1.23,>=1.21.1 (from requests>=1.0.0->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl (132kB)
    100% |████████████████████████████████| 133kB 411kB/s
Collecting singledispatch (from tornado==4.5.3->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/c5/10/369f50bcd4621b263927b0a1519987a04383d4a98fb10438042ad410cf88/singledispatch-3.4.0.3-py2.py3-none-any.whl
Collecting backports_abc>=0.4 (from tornado==4.5.3->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/7d/56/6f3ac1b816d0cd8994e83d0c4e55bc64567532f7dc543378bd87f81cebc7/backports_abc-0.5-py2.py3-none-any.whl
Collecting six (from singledispatch->tornado==4.5.3->salt===2017.7.0-693-ga5f96e6)
  Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: msgpack-python, PyYAML, MarkupSafe, tornado, pycrypto
  Running setup.py bdist_wheel for msgpack-python ... done
  Stored in directory: /root/.cache/pip/wheels/d5/de/86/7fa56fda12511be47ea0808f3502bc879df4e63ab168ec0406
  Running setup.py bdist_wheel for PyYAML ... done
  Stored in directory: /root/.cache/pip/wheels/03/05/65/bdc14f2c6e09e82ae3e0f13d021e1b6b2481437ea2f207df3f
  Running setup.py bdist_wheel for MarkupSafe ... done
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
  Running setup.py bdist_wheel for tornado ... done
  Stored in directory: /root/.cache/pip/wheels/72/bf/f4/b68fa69596986881b397b18ff2b9af5f8181233aadcc9f76fd
  Running setup.py bdist_wheel for pycrypto ... done
  Stored in directory: /root/.cache/pip/wheels/27/02/5e/77a69d0c16bb63c6ed32f5386f33a2809c94bd5414a2f6c196
Successfully built msgpack-python PyYAML MarkupSafe tornado pycrypto
Installing collected packages: MarkupSafe, Jinja2, msgpack-python, PyYAML, certifi, chardet, idna, urllib3, requests, six, singledispatch, backports-abc, tornado, futures, pycrypto, pyzmq, salt
  Running setup.py install for salt ... done
Successfully installed Jinja2-2.10 MarkupSafe-1.0 PyYAML-3.12 backports-abc-0.5 certifi-2018.4.16 chardet-3.0.4 futures-3.2.0 idna-2.6 msgpack-python-0.5.6 pycrypto-2.6.1 pyzmq-17.0.0 requests-2.18.4 salt-2017.7.0-693-ga5f96e6 singledispatch-3.4.0.3 six-1.11.0 tornado-4.5.3 urllib3-1.22
root@master01:~#

We can check the Salt Version on both master and minion.
        
root@master01:~# salt --version
salt 2017.7.0-693-ga5f96e6 (Nitrogen)

root@minion01:~# salt --version
salt 2017.7.0-693-ga5f96e6 (Nitrogen)

For any questions please comment below.
****End of Part 1**** Part-2 available here


Sunday, April 8, 2018

CUPS: Control and User Plane Separation


Telcos' user data traffic is doubling every year due to the proliferation of OTT video, social media, gaming and smart devices. This exponential growth in mobile traffic has led to many architectural changes aligned with SDN and NFV technology. At the same time, the OTTs are the ones taking advantage, because telcos are building out networks where roughly 70% of the traffic served is OTT traffic. On top of that, there is strong demand to serve OTT traffic with low latency, high throughput and the best customer experience.

To serve all these requirements, telcos have to penetrate deeper and deeper into each region and create more EPC locations, even though the number of new users is not increasing in proportion to the traffic demands.

Below is the current network architecture of LTE


At the launch of LTE this was the best-of-breed architecture, but as traffic demands grew, the intermediate or inline nodes became a bottleneck and created head-of-line blocking. CUPS – Control and User Plane Separation – solves this issue and gives a new architectural approach that is easy to implement and leverages SDN and NFV technologies so that all the SLAs and KPIs can be met.


In a nutshell, CUPS allows for:
1. Reducing latency for applications and OTT/video traffic.
2. Leveraging SDN to deliver the data plane more efficiently and to scale the control plane better.
3. Supporting the increase in data traffic by enabling operators to add user plane nodes without changing the number of SGW-C, PGW-C and TDF-C nodes in the network.



Saturday, April 7, 2018

Network Slicing in 5G


Network slicing is a kind of virtual network architecture, which leverages the principles behind network functions virtualization (NFV) and software-defined networking (SDN). Network slicing allows telecom operators to slice a physical network into multiple virtual networks. From a mobile operator’s point of view, a network slice is an independent end-to-end logical network that runs on a shared physical infrastructure, capable of providing a negotiated service quality. The technology enabling network slicing is transparent to business customers. The virtual networks are then tailored to meet the needs of specific applications and services.

SDN and NFV will play a vital role in network slicing. NFV allows network functions like routing, firewall and load balancing to be disaggregated from dedicated OEM appliances and hosted on COTS hardware. Removing the OEM dependency for hardware supply, elasticity and faster time to market are the key reasons to leverage NFV. SDN, on the other hand, is used to manage network flows from a centralized controller sitting in the data center. The main role of SDN is to provide on-demand services without any kind of manual intervention.

5G is all about providing connectivity to massive numbers of IoT devices (an industrial or IoT slice), enhanced mobile broadband for AI, ML and video traffic (a smartphone slice), and access for low latency devices like cars (an autonomous driving slice). Network slicing can be achieved by using flex-algo along with segment routing.

Network slicing will heavily be used in 5G networks to permit business customers to enjoy seamless connectivity and data processing tailored to the specific business requirements that adhere to a Service Level Agreement (SLA) agreed with the mobile operator. The customizable network capabilities include data speed, quality, latency, reliability, security, and services.


Thursday, April 5, 2018

Blockchain For Telecommunications


OpenCT is disrupting the telecommunications industry via its twofold go-to-market approach. The company is positioned to be an innovative and transparent telecommunications provider within the blockchain ecosystem. OpenCT is also building a scalable, high-performance platform that provides the stability and reliability from which transformative blockchain-based applications can be designed to solve industry-specific challenges.


The OpenCT platform is based on a hybrid blockchain model, meaning it is both public and private, allowing for a diverse range of adoption, users, and clients. OpenCT's custom-developed mining algorithm is called Proof of Duration (PoD). When used in combination with Proof of Stake (PoS), it grants miners the benefit of a far faster, more democratic, and energy efficient approach. The OpenCT Token (OCT) fuels the interactions of the telecommunications-specific blockchain. OCT acts as a service enabler which clients of the platform can use to unlock and enjoy any service offered over the platform. Most importantly, with the smart technology being used to power the OpenCT platform, the block production rate works at an exceptionally fast rate of 100,000 Transactions Per Second (TPS).

In addition to launching the OpenCT platform, the company is also introducing two inaugural applications that work to address significant telco service pain points:
Blockchain as a Transport (BaaT): BaaT is a leading network technology that is well-positioned to become the transport service of choice for all businesses because of its highly secure and cost-effective ability to leverage the Internet by breaking down barriers that have limited popular services to a single data center, a single autonomous system, or a single carrier.

Blockchain-Defined Wide Area Networks (BD-WAN): BD-WAN will have the unique ability to establish and tear-down logical and physical circuits of any capacity, seamlessly and transparently, thereby enabling real-time billing based solely on bandwidth usage.

"Throughout the world, data transmission, digital communication and the transferring of data from point A to point B are key functions and the lifeline that gives most, if not all, businesses and industries the ability to prosper and exist," comments Mayande Walker, Chief Operating Officer of OpenCT. "OpenCT will provide a new process, innovate conveyance and offer an effective solution to those who have challenges with costly connectivity giving the entire telco community an option for secure, peer-to-peer financial transactions without the use of any third party or financial institutions."

OpenCT (https://www.openct.io/) is taking blockchain to new heights, ensuring the adoption of this industry disrupting solution is fully supportive of service providers, enterprises and carriers alike.

