Wednesday, January 17, 2018

PCE vs OPENFLOW Controller


In traditional networking, the control plane exchanges reachability information with neighbors, while the data plane forwards traffic based on the forwarding state that the control plane programs into the local stack. In the world of SDN, the defining idea is the separation of the control and data planes. At a high level, an SDN controller can be OpenFlow based or PCE based.

But most network planning engineers struggle to understand which controller is best for their network.

With OpenFlow, centralizing the control plane of the network usually requires full upgrades and/or replacements of significant parts of the network. PCE, by contrast, introduces an evolutionary approach towards centralized control of the network infrastructure. In the beginning, only the edge layer of the network needs to support PCEP, while the network can continue using traditional signaling (such as RSVP-TE) and the same schemes for mapping traffic to paths at the edges, as shown in the figure below. There is no need for the controller to communicate with every network element in the path, as is the case with OpenFlow.

So, in a nutshell, if you plan to deploy an OpenFlow based controller you have to upgrade the entire network and manage OpenFlow state on each and every router, whereas with a PCE based controller only the edge routers need to support PCEP and the rest of the network requires no upgrade.
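To make the operational difference concrete, here is a minimal, hypothetical Python sketch (the node names, the `PceController` and `OpenFlowController` classes, and their methods are illustrative only, not any vendor's API). It models how an OpenFlow controller must touch every hop in a path, while a PCE controller only speaks PCEP to the head-end router, which then signals the path itself via RSVP-TE.

```python
# Conceptual sketch only: class names and methods are hypothetical.

PATH = ["edge-A", "core-1", "core-2", "edge-B"]  # hops of an LSP/flow path

class OpenFlowController:
    def program_path(self, path):
        # OpenFlow: the controller installs a flow entry on EVERY hop,
        # so every node must support OpenFlow and hold controller state.
        for node in path:
            print(f"OpenFlow: install flow entry on {node}")

class PceController:
    def program_path(self, path):
        # PCE: the controller computes the path centrally, then sends it
        # via PCEP to the head-end only; the head-end signals the rest
        # of the path using traditional RSVP-TE.
        head_end = path[0]
        print(f"PCEP: send computed path {path} to {head_end}")
        print(f"{head_end}: signal LSP along path via RSVP-TE")

if __name__ == "__main__":
    OpenFlowController().program_path(PATH)  # touches all 4 nodes
    PceController().program_path(PATH)       # touches 1 node
```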

PCE vs OPENFLOW


Sunday, January 14, 2018

Scale SDN Applications with Micro Services Architecture


Elasticity is a prerequisite for any SDN application that has to scale out horizontally. Scaling the entire SDN application does not make sense when the scale requirement applies to only a few of its services. However, if the SDN application is written with a monolithic architecture, the entire codebase has to be recompiled even for a small scaling requirement. A monolithic architecture therefore becomes the head-of-line blocker when the application has to scale out.

As per Wikipedia, “A software system is called "MONOLITHIC" if it has a monolithic architecture, in which functionally distinguishable aspects (for example data input and output, data processing, error handling, and the user interface) are all interwoven, rather than containing architecturally separate components.”

A monolithic application is always built as a single unit, which means that even a small code change requires recompilation of the entire codebase. Below are the challenges of applications deployed with a monolithic architecture:-
1. Scaling is one of the biggest challenges of monolithic applications
2. Slow build and release cycles, since the whole application is rebuilt and redeployed together
3. Monolithic applications are implemented on a single development stack

Monolithic Architecture


By contrast, microservices are modular enough to support any kind of business requirement. Monolithic code can be divided into smaller parts known as microservices, which communicate with each other using RPC calls (a minimal sketch follows the list below). This architecture has the benefits listed below compared to a monolithic architecture:-
1. Easy to scale: each service scales independently
2. Not dependent on a single development stack; every microservice can be written in a different language
3. Upgrading one microservice does not affect the others
4. Microservices-based architecture can result in far more efficient use of code and underlying infrastructure
5. Easier to implement and faster time to market
6. Provides operational efficiency, as the DevOps team can focus on updating only the relevant microservice rather than the entire codebase
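As a minimal sketch of the RPC-style communication described above (the service name, method, and port are hypothetical choices for illustration), here are two processes built only on Python's standard-library XML-RPC modules: one microservice exposes a function, and a second service calls it remotely.

```python
# topology_service.py -- one microservice exposing an RPC endpoint.
# Names (get_neighbors, port 9000) are illustrative only.
from xmlrpc.server import SimpleXMLRPCServer

TOPOLOGY = {"edge-A": ["core-1"], "core-1": ["edge-A", "edge-B"]}

def get_neighbors(node):
    """Return the neighbor list for a node, or [] if unknown."""
    return TOPOLOGY.get(node, [])

server = SimpleXMLRPCServer(("localhost", 9000), allow_none=True)
server.register_function(get_neighbors)
server.serve_forever()  # this service scales independently of its callers
```

```python
# path_service.py -- a second microservice consuming the RPC endpoint.
from xmlrpc.client import ServerProxy

topology = ServerProxy("http://localhost:9000", allow_none=True)
print(topology.get_neighbors("core-1"))  # -> ['edge-A', 'edge-B']
```

Because each service runs as its own process, the topology service can be scaled or upgraded without recompiling or redeploying the path service.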




Wednesday, December 27, 2017

Understanding Back propagation In Neural Networks


Let me start with a question: is a newborn baby able to think and recognize things on day one? The answer is no, because the baby undergoes a continuous training process that teaches him or her who mother, father, brothers, and sisters are. Once this training is complete, the connections between the neurons become so strong that he or she easily recognizes family members.

But what happens if someone shows the baby a face resembling a known one, say the mother's sister, who is not the mother but looks just like her? The baby compares the new image with the stored images of the mother and figures out that this is not the mother, even though she looks exactly like her. This entire process of rethinking and correcting the thinking is known as back propagation.

Neural Network Mathematics explained how neural networks can be trained using simple algorithms. Back propagation is a good way to let the connections know that the current weight and bias values are not good enough and need to change to get better results.

Let's imagine a three-layer neural network, as shown in the image below, with "w" as the weights and "b" as the biases. These are initialized with random numbers; we can also draw them from a Gaussian distribution.



In order to train the network, we need to define an error or loss function between its output and the desired output "d" that the network is supposed to return. Here we define the cost function as the mean squared error. There are other ways to calculate the error, but the basic principle remains the same.
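Written out, with d_i as the desired output and y_i as the network's actual output over the n output neurons, a common form of this cost (the factor of one half is a convention that simplifies the derivative) is:

```latex
E = \frac{1}{2} \sum_{i=1}^{n} (d_i - y_i)^2
```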



The objective is to minimize the loss, giving more accuracy at any given point in time. Once we know the loss, we start calculating the gradient of the network's error with respect to its modifiable weights. So, in short, back propagation is nothing but adjusting the weights and biases of the existing network until it produces the desired output that matches the test output.
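To make this concrete, here is a minimal sketch of back propagation for a tiny three-layer network (the layer sizes, learning rate, and training pair are arbitrary illustrative choices, not from the post), using the squared-error cost above, sigmoid activations, and plain gradient descent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny 3-layer network: 2 inputs -> 3 hidden -> 1 output.
# Weights "w" and biases "b" start as Gaussian random numbers.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 2)), rng.normal(size=(3, 1))
w2, b2 = rng.normal(size=(1, 3)), rng.normal(size=(1, 1))

x = np.array([[0.5], [0.1]])  # illustrative input
d = np.array([[0.9]])         # desired output "d"
lr = 0.5                      # learning rate (arbitrary)

for step in range(1000):
    # Forward pass: compute activations layer by layer.
    z1 = w1 @ x + b1; a1 = sigmoid(z1)
    z2 = w2 @ a1 + b2; y = sigmoid(z2)

    # Loss: E = 1/2 * sum((d - y)^2), as defined above.
    loss = 0.5 * np.sum((d - y) ** 2)

    # Backward pass: gradient of E w.r.t. each weight and bias,
    # propagated from the output layer back to the hidden layer.
    delta2 = (y - d) * y * (1 - y)            # dE/dz2
    delta1 = (w2.T @ delta2) * a1 * (1 - a1)  # dE/dz1

    # Gradient-descent update: nudge weights and biases against the gradient.
    w2 -= lr * (delta2 @ a1.T); b2 -= lr * delta2
    w1 -= lr * (delta1 @ x.T);  b1 -= lr * delta1

print(f"final loss: {loss:.6f}, output: {y.ravel()}")  # output approaches d
```

Running this, the loss shrinks toward zero as the weights and biases are repeatedly adjusted, which is exactly the "rethinking and correcting" described above.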

