Showing posts with label Machine Learning.

Sunday, May 10, 2020

Machine Learning For Newcomers


Sharing a collection of machine learning algorithms that can be used by anyone. These algorithms are a good starting point for newcomers who want to learn machine learning without any prior knowledge. I have used the scikit-learn library in Python to write the basic code.

What is machine learning?

Difference Between Artificial Intelligence, Machine Learning and Deep Learning

All About Machine Learning Algorithms: When To Use Which Machine Learning Algorithm

What is neuron and artificial neuron in deep learning?

What is Perceptron in Deep Learning?

Loading First Data Set In Machine Learning

Predicting Values By Using k-nearest Neighbors Machine Learning Algorithm

Predict Prices By Using Linear Regression Algorithm In Machine Learning

Predict Probability By Using Logistic Regression In Machine Learning

When To Use Softmax Activation Algorithm In Deep Learning

Architecture Of Predicting Hand Written Digits

What is Data Pre-Processing In Machine Learning?

Rectified Linear Unit Activation Function In Deep Learning

Neural Network Mathematics

Understanding Back propagation In Neural Networks

Machine Learning Use Cases in Networking



Sunday, September 29, 2019

Machine Learning Use Cases in Networking



Wednesday, December 27, 2017

Understanding Back propagation In Neural Networks


I want to ask a question: is a newborn baby able to think and recognize things on day one? The answer is no, because the baby has to undergo a training process at every second that lets him or her learn who mother, father, brothers and sisters are. Once this training is completed, the connections between the neurons become so strong that he or she easily starts recognizing family members.

But what happens if someone shows the baby a face that resembles an already known face, say the mother's sister, who is not the mother but looks like her? The baby tries to relate the new image to the stored images of the mother and figures out that this is not the mother, even though she looks exactly like her. This entire process of rethinking and correcting the earlier conclusion is known as back propagation.

Neural Network Mathematics explained how neural networks can be trained using simple algorithms. Back propagation is one good way to let the connections know that the current weight and bias values are not good enough and need to be changed to get better results.

Let's imagine a three layer neural network, as shown in the image below, with "w" as weights and "b" as biases. These are initialized with random numbers; we can also draw them from a Gaussian distribution.



In order to train a network we need to define an error or loss function between its output and the desired output "d" which the network is supposed to return. Here we define the cost function as the mean squared error. There are other methods to calculate the error, but the basic principle remains the same.
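The original equation image is not reproduced here, but as a rough sketch (assuming the usual squared-error convention), the loss between the network output y and the desired output d can be computed like this:

import numpy as np

def mse_loss(y, d):
    # Mean squared error between network output y and desired output d.
    # The 1/2 factor is a common convention that simplifies the gradient;
    # some texts use a plain sum instead of the mean.
    y = np.asarray(y, dtype=float)
    d = np.asarray(d, dtype=float)
    return 0.5 * np.mean((d - y) ** 2)

# Example: current network output vs desired output
print(mse_loss([0.2, 0.7], [0.0, 1.0]))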



The objective is to minimize this loss, i.e. to get more accuracy with minimum loss at any given point of time. Once we know the loss, we start calculating the gradient of the network error with respect to the network's modifiable weights. So in short, back propagation is nothing but adjusting the weights and biases of the existing network so that it produces the desired output which matches the test output.



Wednesday, December 13, 2017

Neural Network Mathematics


A perceptron is a type of artificial neuron, and an artificial neuron mimics our brain, as explained in my previous post.

Now we need to understand the magic behind a neural network's ability to predict any kind of output based on the input. To understand it better, we should know the basics of the sigmoid function and the gradient descent algorithm.

I have already explained the high level architecture of predicting handwritten numbers. This time, I will focus more on the various steps that take us from the input to the predicted output.

Below is the pictorial view of the stages of the neural network, with a small sketch after the list; I will try to add more complete code in an upcoming post.

Stage 1: Define the inputs, weights, bias and label the outputs.



Stage 2: Sum all the weighted inputs and add the bias

Stage 3: Calculate the forward pass

Stage 4: Calculate the backward pass and update the weights

Stage 5: Repeat the process till we get the desired output.
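A minimal sketch of these five stages, assuming a single sigmoid neuron trained with plain gradient descent; the inputs, label and learning rate below are made up purely for illustration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: define the inputs, weights, bias and the labelled output
x = np.array([0.5, 0.8])       # hypothetical inputs
w = np.random.randn(2) * 0.1   # small random weights
b = 0.0                        # bias
d = 1.0                        # desired (labelled) output
lr = 0.5                       # learning rate

for epoch in range(1000):
    # Stage 2: summation of the weighted inputs plus the bias
    z = np.dot(w, x) + b
    # Stage 3: forward pass through the activation function
    y = sigmoid(z)
    # Stage 4: backward pass - gradient of 0.5*(d - y)^2 w.r.t. w and b
    grad_z = (y - d) * y * (1 - y)
    w -= lr * grad_z * x
    b -= lr * grad_z
# Stage 5: repeat until the output is close to the desired value
print("final output:", sigmoid(np.dot(w, x) + b))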



Monday, December 11, 2017

Rectified Linear Unit Activation Function In Deep Learning


The sigmoid activation function produces an S-shaped curve, and its values always lie between 0 and 1. The sigmoid function maps large negative values close to 0 and large positive values close to 1.

But the sigmoid activation function has a major drawback: its gradient is close to 0 when the output saturates near 0 or 1, which means that during back propagation the weight values barely change and we get no useful learning. In other words, the sigmoid activation function suffers from the vanishing gradient problem near 0 and 1.

Below is the image of the sigmoid curve:


To get rid of the above issues, we can use the Rectified Linear Unit activation function, also known as ReLU. The ReLU function has a range between 0 and infinity. Hence the sigmoid activation function can be used to predict values between 0 and 1, whereas ReLU can be used to model real positive numbers. The best part of ReLU is that its gradient does not vanish as we increase the input value x.

ReLU can be defined with the simple mathematical notation below:
ReLU(x) = max(0, x)
The function says that if the input value is 0 or less than 0, ReLU returns 0; otherwise it returns the input value x.
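A minimal NumPy sketch of this definition (the plotting range below is just for illustration):

import numpy as np
import matplotlib.pyplot as plt

def relu(x):
    # ReLU(x) = max(0, x), applied element-wise
    return np.maximum(0, x)

x = np.linspace(-10, 10, 200)
plt.plot(x, relu(x))
plt.xlabel("x")
plt.ylabel("ReLU(x)")
plt.show()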

Below is the image of the ReLU curve:



Saturday, December 9, 2017

What is Data Pre-Processing In Machine Learning?


Data pre-processing means applying some mathematical operation to the data without losing its content. Let's take an example: we want to do dimensionality reduction so that we can visualize the data more efficiently in a 2D graph. It means we need some kind of pre-processing of the same data so that we can drop some of it without losing its actual meaning.

Let's take another example from the previous post to get a deeper understanding of data pre-processing. Below is the matrix or dataset which has the price of a pizza in INR, which is fully dependent on its size. In India the price is in INR, but for the United States it will be in Dollars and for Dubai it will be in Dirhams. The size in India is in inches, but in the United States it might be in centimeters. But if we are developing some kind of relation, we can pre-process the data and fit everything between 0 and 1. By doing this, our dependency on INR, Dollars and inches is completely gone.
In this case, we can take the maximum and minimum value of every column and apply min-max scaling to the existing data: new_value = (value - min) / (max - min). By doing this we get a new form of the data whose values lie between 0 and 1 without losing their actual meaning.
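A minimal sketch of this scaling with scikit-learn's MinMaxScaler (the pizza size/price numbers below are placeholders, not the original table):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical pizza dataset: column 0 = size (inches), column 1 = price (INR)
data = np.array([[4.0, 42], [6.0, 62], [7.8, 72], [9.0, 75]])

# Scale every column independently into the range [0, 1]
scaler = MinMaxScaler()
scaled = scaler.fit_transform(data)
print(scaled)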

The new pre-processed data will look like the example below:


The advantage of pre-processing the data is that now everything fits between 0 and 1, i.e. inside a unit square. Below is the comparison before and after pre-processing; after pre-processing we have good visibility and everything is well fitted inside the unit square.


Wednesday, December 6, 2017

Architecture Of Predicting Hand Written Digits


Logistic regression and the softmax activation function are the most important functions that help us predict handwritten digits from the MNIST dataset. I am using Keras to predict the digits, but for this post I have created a high level diagram which helps everyone understand which function is required at which layer to predict the MNIST handwritten digits.
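As a rough sketch of such an architecture (not the exact diagram from the post; the layer sizes here are only illustrative), a minimal Keras model could look like this:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# 28x28 MNIST images flattened into 784 inputs, a hidden ReLU layer,
# and a 10-way softmax output (one probability per digit 0-9)
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()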


Monday, December 4, 2017

When To Use Softmax Activation Algorithm In Deep Learning


In my previous post, we discussed how to use the simple linear regression method when the target variable is directly dependent on the input variable. When we need to pick a single value out of two, which is normally called binary classification, we have to use the logistic regression method.

In today's post, I am discussing the softmax activation function, which is used when we have to predict one event out of n different events. The softmax function calculates the probability of each target class over all possible target classes for the given inputs.

Let's understand it with an example: we have a list of handwritten digits and we want to predict which of the 10 digits (0 - 9) has the highest probability. In this case, we have 10 target classes, and out of those 10 we have to pick a single class, which is nothing but a single digit. This is where we use the softmax activation function.

I have used the below code to create a softmax graph for the numbers from 0 to 9 and found that the higher numbers get the higher probabilities. So we can use this activation function in deep learning or in a neural network when predicting one target class out of multiple target classes.

Use the below code to see the basic functionality of the softmax activation function:
import numpy as np
import matplotlib.pyplot as plt

def softmax(add_inputs):
    # Softmax equation: exp of each input divided by the sum of all exps
    exps = np.exp(np.asarray(add_inputs, dtype=float))
    return exps / np.sum(exps)

def line_graph(x, y, x_title, y_title):
    plt.plot(x, y)
    plt.xlabel(x_title)
    plt.ylabel(y_title)
    plt.show()

x = range(0, 10)
y = softmax(x)
line_graph(x, y, "Inputs", "Softmax Probability")


Sunday, December 3, 2017

Predict Probability By Using Logistic Regression In Machine Learning


Logistic regression is used to predict an outcome variable which is categorical. A categorical variable is a variable that can take only specific and limited values, like gender (male or female), yes or no, etc.

We have an example of students who have studied for a specific number of hours and, based on that, are marked as pass or fail.

Below is the dataset used for the example:
In the previous post, we saw how to use the linear regression method to solve a prediction problem. Let's use the same linear regression method on the above dataset and plot it.

As per the graph, we can't see any obvious relation between pass/fail and the number of hours studied. But let's try to plot it using our equation of a line, as in the previous post.

As per the above output, linear regression predicts values ranging from below 0 to more than 1. But we need our answer to be either 0 or 1. The predictions given by the linear regression algorithm do not match what we are looking for. So we need a better regression line, one that produces an output which is either 0 or 1: not less than 0 and not more than 1.

So logistic regression seems to be the right choice for this example. Most often we want to predict outcomes as yes or no, and in that case we can apply the logistic regression algorithm and get the desired outcome. Logistic regression outcomes always fall between 0 and 1, and it also expresses the outcome in terms of probability. The higher the probability, the more confident we are in the predicted outcome. This is achieved by using the logistic function.

The logistic function is given by f(x) = L / (1 + e^(-k(x - x0))), where L is the curve's maximum value, k is the steepness of the curve and x0 is the x value of the sigmoid's midpoint.

A standard logistic function is called the sigmoid function. Let's substitute the values below into the logistic function and see what the result is:
k = 1, x0 = 0, L = 1. Substituting these values into the logistic function gives f(x) = 1 / (1 + e^(-x)), which is nothing but the sigmoid.

Let's draw the sigmoid curve and see how it looks. The sigmoid is not only used to classify 0 or 1; it also tells us the probability of a certain event occurring.
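A minimal sketch to draw the sigmoid curve with matplotlib (the plotting range below is just for illustration):

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    # Standard logistic function with k = 1, x0 = 0, L = 1
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10, 10, 200)
plt.plot(x, sigmoid(x))
plt.xlabel("x")
plt.ylabel("sigmoid(x)")
plt.show()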

Now let's solve the above example with logistic regression and see what the curve looks like.

Now I am trying to predict whether a student who studies for 8.1 hours will fail or pass, and what the probabilities would be.

I am getting the answer that this student has a passing probability of 80% and a failing probability of 20%.
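A minimal sketch of this workflow with scikit-learn (the hours-studied data below is made up for illustration and will not reproduce the exact 80/20 split from the post):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: hours studied -> pass (1) or fail (0)
hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]])
result = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

clf = LogisticRegression()
clf.fit(hours, result)

# Probability of [fail, pass] for a student who studies 8.1 hours
print(clf.predict_proba([[8.1]]))
print(clf.predict([[8.1]]))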



Friday, December 1, 2017

Predict Prices By Using Linear Regression Algorithm In Machine Learning


Simple linear regression is a statistical method that allows us to summarize and study the relationship between two continuous variables where the output variable is directly proportional to the input. The variable we are predicting is called the criterion variable and lies along the Y axis, and the variable we base our predictions on is known as the predictor and lies along the X axis. When there is a single predictor variable, the method is called simple linear regression; when we have more than one predictor, the model is called multiple linear regression.

There are lots of examples of such statistical relations: height and weight, speed and petrol consumption, router bandwidth and router CPU, pizza size and price, etc.

In the previous post, we used the k-nearest neighbors algorithm to predict values. In this post, we will use the simple linear regression method to predict pizza prices.

Before moving to the machine learning code, we first need to understand the equation of a line. The equation of a line represents the relationship between the X and Y values: y = mx + c

c is the y-intercept, i.e. the point where the line meets the Y axis.

The slope m is nothing but (y2 - y1) / (x2 - x1).
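For instance (numbers chosen purely for illustration), a line through the points (2, 5) and (4, 9) has slope m = (9 - 5) / (4 - 2) = 2, and since y = mx + c, the intercept is c = 5 - 2*2 = 1, giving the line y = 2x + 1.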

With the above equation, once we find the values of m and c, which remain constant, we can get the value of y for every value of x. Now let's take the example of pizza size and its price. First we plot the variables from the existing dataset, and after that we will use simple linear regression to predict the pizza price for a given pizza size.

Below is the graph of pizza size vs pizza price. This example is a good fit for the simple linear regression method because as the size increases the price also increases, so we can say there is a direct relationship between pizza size and price.

Now we have to find the values of m and c so that we can predict the price for any pizza size. To do this, we will use the scikit-learn library, import the linear regression model, and find the slope and y-intercept values.

Use the below Python code in a Jupyter notebook to predict the price for pizza sizes from 100 to 110.

import matplotlib.pyplot as plt
from sklearn import linear_model

pizza_size = [[4.0],[4.5],[5.0],[5.6],[5.8],[6.0],[7.2],[7.4],[7.8],[9]]
pizza_price = [42,45,50,55,58,62,65,70,72,75]

print("Pizza Size and Pizza Price")
for row in zip(pizza_size, pizza_price):
    print(row[0], '->', row[1])

# Instantiating the Linear Regression model
reg = linear_model.LinearRegression()
reg.fit(pizza_size, pizza_price)

# Storing the slope
m = reg.coef_
# Storing the y-intercept
b = reg.intercept_

print("Slope of the line is:", m)
print("Y intercept is:", b)

# Plot the existing relationship on a graph
sizes_flat = [s[0] for s in pizza_size]
plt.scatter(sizes_flat, pizza_price, color='red')

# Equation of a straight line is y = m*x + b
# Now that we know m and b, we can predict the points on the straight line
predicted_values = [m[0] * s + b for s in sizes_flat]

# Plot the straight line
plt.plot(sizes_flat, predicted_values, 'r--')
plt.xlabel("pizza_size")
plt.ylabel("pizza_price")
plt.show()

# Predict the pizza prices for sizes from 100 to 110
for i in range(100, 110):
    print("The price of pizza will be:", reg.predict([[i]]))



Thursday, November 30, 2017

Predicting Values By Using k-nearest Neighbors Machine Learning Algorithm


The k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression in machine learning. It is one of the simplest algorithms that can be used to predict outcome values. k-NN is also known as lazy learning because it simply stores the training data and waits until it is given a test tuple.

The k-nearest neighbors algorithm has the below mentioned steps:
1. Pick a value of K.
2. Search for the K observations in the training data that are nearest to the measurements of the unknown data.
3. Use the most popular response value from the K nearest neighbors as the predicted response value for the unknown data (the iris data in our example).

In the previous post we loaded our first dataset, which is IRIS. Now we use the same dataset to train our model with the k-NN algorithm and predict the values or outcomes. Click here to check which algorithm is used when.

Follow the code as explained in the previous post to load the IRIS dataset by using Scikit Learn.

# Load the IRIS dataset (same as in the previous post)
from sklearn.datasets import load_iris
iris = load_iris()

# The below code loads the 4 attributes of IRIS into x_train_data.
# The same can be verified by using the shape attribute.
x_train_data = iris.data
y_train_data_outcome = iris.target

# Import the KNN classifier from sklearn
from sklearn.neighbors import KNeighborsClassifier

# Define the neighbors value; I have used 1, you can try 2, 3, 4 or 5 and check the results
knn = KNeighborsClassifier(n_neighbors=1)
print(knn)

# Train the KNN classifier on the training samples: the 4 attributes in iris.data
# and the outcome in iris.target
knn.fit(x_train_data, y_train_data_outcome)

# Define a new sample with 4 attributes (sepal length, sepal width, petal length, petal width)
predict_species = [3, 5, 4, 2]

# Use the knn predict method to predict the species class
print(knn.predict([predict_species]))

You can change the n_neighbors value and see what happens.
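As a follow-up sketch (not from the original post), one common way to check how good the chosen n_neighbors value is would be to hold out part of the data and compute an accuracy score:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()
# Hold out 25% of the samples for testing
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=42)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, knn.predict(X_test)))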


Wednesday, November 29, 2017

Loading First Data Set In Machine Learning


In my previous post, I explained the difference between deep learning and machine learning. In this post, I am going to talk more about how to load a dataset in scikit-learn, which is a machine learning library well suited to newcomers.

This post focuses on loading the IRIS dataset from the scikit-learn library using a Jupyter notebook. The dataset goes back to 1936: Edgar Anderson collected 50 samples of each of 3 different species of IRIS, and for each sample he measured the sepal length, sepal width, petal length and petal width and recorded the measurements along with the species.

You can also load the raw CSV file into your Jupyter notebook directly from the internet. One simple way is to read it with pandas from the UCI link:

import pandas as pd
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris_csv = pd.read_csv(url, header=None)
print(iris_csv.head())

Output: there are 150 rows, but I have only pasted one sample row from each species.
5.1,3.5,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
5.9,3.0,5.1,1.8,Iris-virginica

The above output is comma separated, starting with sepal length, sepal width, petal length and petal width. The dataset also records the species: setosa, versicolor and virginica. With this dataset we can predict the species of a flower. This is known as supervised learning because we are trying to learn the relationship between the data, namely the IRIS measurements, and the outcome, which is the species of IRIS.

I am using the scikit-learn library for writing the machine learning code. You can also install it from here. The best way to learn machine learning or deep learning is to install the Anaconda package on your laptop. It has a lot of built-in libraries which can help you sharpen your skills, along with a GUI, which is nothing but the Jupyter (IPython) notebook.

The IRIS dataset is one of the most famous datasets in machine learning and it is already built into scikit-learn, so we don't need to load it from anywhere else. Let's load the dataset.

# Import the load_iris function from the sklearn datasets
from sklearn.datasets import load_iris

# Save the iris dataset and its attributes in an object called iris
iris = load_iris()
type(iris)

# iris is an object of a special container called Bunch, which is scikit-learn's
# object type for storing datasets and their attributes.

# To check what's in the dataset we can run the below code
print(iris.data)

# To read the full description of the dataset we can run the below code
print(iris.DESCR)

In machine learning, each row is known as an observation and each column is known as a feature.

#The below command will let you know about the features of the IRIS dataset
print (iris.feature_names)
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']

#The below command will let you know about the type of IRIS category
print (iris.target_names)
['setosa' 'versicolor' 'virginica']

Each value we are predicting is called the response, target, or outcome. In machine learning, features and responses are separate objects, and they can be numeric too. All the features and responses must be NumPy arrays, otherwise it will not work. We can check the type of the dataset by using the "type(iris.data)" command.

In this dataset, the attributes are nothing but the X axis and the outcome is nothing but the Y axis. So let's store the attributes in X and the outcome in Y.

#Store the IRIS features in X
X = iris.data

#Store the IRIS outcome in Y
Y = iris.target

Stay tuned for the next post, which will help you understand how to use this dataset to write your first machine learning algorithm and check its accuracy score.


Sunday, November 26, 2017

What is Perceptron in Deep Learning?


To get a better understanding of neural networks, we first need to understand what a perceptron is. A perceptron is a type of artificial neuron, and an artificial neuron mimics our brain, as explained in my previous post.

Let's understand what a perceptron is and how the perceptron algorithm works. As per the below image, we provide some inputs and every input has a specific weight attached to it. The weights define how important each input is. After multiplying each input by its weight, all the products are summed together and sent to an activation function. The reason for using an activation function is to apply a threshold value: if the signal is above the threshold, the neuron or perceptron fires, otherwise it does not. There are different kinds of activation functions like the sigmoid, step function and sign function, depending on the use case. The main idea is to define an algorithm that learns the values of the weights, which are then multiplied with the input features in order to decide whether the neuron fires or not. We can even call the perceptron a single layer binary linear classifier because it can only classify inputs which are linearly separable.



Steps of the Perceptron Learning Algorithm (a small sketch follows the list):
1. Initialize the weights and the threshold.
2. Provide the input and calculate the output.
3. While training the artificial neuron we already know the expected output values, so for each input we calculate the output and compare it with the expected output.
4. If the output doesn't match the expected one, we update the weights accordingly in order to reduce the loss.
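A minimal NumPy sketch of these steps, assuming a step activation and a tiny linearly separable dataset (a logical AND gate) chosen purely for illustration:

import numpy as np

# Toy linearly separable dataset: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([0, 0, 0, 1])  # expected outputs

# Step 1: initialize the weights and threshold (here modelled as a bias)
w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate

for epoch in range(20):
    for x, target in zip(X, d):
        # Step 2: weighted sum plus bias, then step activation
        output = 1 if np.dot(w, x) + b > 0 else 0
        # Steps 3 and 4: compare with the expected output and update the weights
        error = target - output
        w += lr * error * x
        b += lr * error

print("weights:", w, "bias:", b)
for x in X:
    print(x, "->", 1 if np.dot(w, x) + b > 0 else 0)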


Thursday, November 23, 2017

What is neuron and artificial neuron in deep learning?


Deep learning is a form of machine learning that uses a model of computing that’s very much inspired by the structure of the brain.

With the help of deep learning, we are trying to mimic the human brain. We are trying to train the machine the way the human brain thinks and works.

To get a better understanding of how machine learning works, we first need to understand how the human brain works.

The brain works with the help of brain cells, which we call neurons. Neurons have dendrites, which receive signals from other neurons. The cell body, which is surrounded by the dendrites, receives all the signals, sums them, and forwards them to the axon. With the help of the axon, these signals are fired to the next neuron. The junction across which the signal passes to the next neuron is known as a synapse, and a neuron fires to the next neuron only when the combined signal from the cell body exceeds a particular limit.

By copying the same process we can create an artificial neuron, which performs the same function as the brain does. Read more about Machine Learning Algorithms: When To Use Which Machine Learning Algorithm


Now the question comes: if our brain can do all this work, why do we need an artificial neuron? The advantage of an artificial neuron is that it can mimic our brain once it is given training data. It never tires, it can handle multiple inputs without any issues, and it can produce accurate answers across far more permutations and combinations than a human brain ever could.


Tuesday, November 21, 2017

All About Machine Learning Algorithms: When To Use Which Machine Learning Algorithm


People always find it hard to understand which machine learning algorithm should be applied to a given problem. Today we have an infinite number of problems, and we might think we need an infinite number of solutions. But when we start classifying the problems into different categories, we usually find that these infinite problems fit into a finite set of categories, and the infinite solutions reduce to a finite set of solutions. In this post, I will try to explain what an algorithm is and under which circumstances we use which type of machine learning algorithm.

To tell a computer what it has to do, you need to write a program. A program is nothing but a set of instructions written in some syntax. This syntax can be written in any programming language like Java, C, Python, Ruby on Rails, etc. For example, if you have to write a program to print the numbers 1 to 20, you can opt for any programming language; the syntax will differ but the logic will remain the same. Now the question comes: what is logic? Logic is nothing but an algorithm. An algorithm is a step by step procedure for solving a problem in the computer world.

Machine learning focuses on the development of computer programs that can change when exposed to new data. Learning comes from past experience, and machine learning tunes itself accordingly. Read more about types of Machine Learning

How does machine learning help to solve any kind of problem? The problems can be divided into different categories as mentioned below:
1. Classification Problem: when we know that a problem has a fixed set of outputs, we can use a classification algorithm. E.g. differentiating between apples and oranges is a type of classification problem.
2. Anomaly Detection Problem: when the same type of input is given to a system and we want to detect any deviation in that input, an anomaly detection algorithm is used. It learns a certain pattern and alerts whenever the pattern changes.
3. Regression Problem: whenever we want the machine to give us a number as the output, we apply a regression algorithm. It is used to calculate numeric values. E.g. what is the minimum investment I should put in to become a millionaire in 10 years?
4. Clustering Problem: when we are trying to find the structure behind a given dataset, a clustering algorithm is required. By understanding how the data is organized, you can better predict the behavior of a particular event.
5. Reinforcement Problem: when a sequence of decisions has to be made, a reinforcement learning algorithm is applied.


Sunday, November 19, 2017

Difference Between Artificial Intelligence, Machine Learning and Deep Learning


Artificial Intelligence is the capability of a machine to imitate intelligent human behavior. It is accomplished by studying how the human brain thinks, learns, decides and works while trying to solve a problem. The main applications of Artificial Intelligence are speech recognition, image recognition, self-driving cars, self-driving networks, Siri, YouTube and Pandora. The term AI was first coined in 1956, but due to the limited computational power of the time it couldn't take off back then.

After AI, around the 1990s, Machine Learning came into the picture. Machine Learning is nothing but a type of Artificial Intelligence that provides computers with the ability to learn without being explicitly programmed. Machine Learning is of different types, which can be found in the previous post.
Machine learning couldn't fly high because of the below mentioned limitations:
1. Data with a large number of inputs and outputs
2. High dimensionality of data
3. It can solve NLP and image recognition to some extent, but not at a deep level.
4. It doesn't support feature extraction. Feature extraction is a way of solving the problem without being given all the required inputs.

Click here to know more about Machine Learning Algorithms: When To Use Which Machine Learning Algorithm

Deep Learning is a subset of Machine Learning. It came into existence around 2005-2006, and the motive behind deep learning is to overcome the existing problems of machine learning. Deep Learning is a collection of statistical machine learning techniques used to learn feature hierarchies, often based on artificial neural networks. Deep learning models are capable of focusing on the right features by themselves, requiring only a little guidance from the programmer, and they also solve the dimensionality problem. The main idea behind deep learning is to build learning algorithms that mimic the brain, and it is implemented with the help of neural networks.

Click here to know about what is neuron in deep learning.


Thursday, September 14, 2017

What is machine learning?


Machine learning is the automated extraction of knowledge from data. It is a way to automate your existing workflows with old mathematics theorems. In the end, machine learning is nothing but a programmatic way to solve many kinds of problems.

The problem can be predicting house prices, recognizing people from their photos, checking which interfaces of a router will go down, predicting that a router link will be choked, finding sales numbers based on the investment, etc.

As per Andrew Ng from Coursera: "The complexity in traditional computer programming is in the code (programs that people write). In machine learning, learning algorithms are in principle simple and the complexity (structure) is in the data. Is there a way that we can automatically learn that structure? That is what is at the heart of machine learning."

Types of Machine Learning
Supervised learning, also known as predictive modeling, is the process of making predictions using data. It applies what has been learned in the past to new data, using labeled examples to predict future events. The learning algorithm can also compare its output with the correct, intended output and find errors in order to modify the model accordingly. You then make predictions on new data for which you don't know the true outcomes. E.g. if the dataset is email messages, then with supervised learning we can find out whether a particular email is spam or not.

Unsupervised learning is the process of extracting structure from data, or learning how best to represent the data. Unsupervised machine learning algorithms are used when the training information is neither classified nor labeled. Unsupervised learning studies how systems can infer a function to describe a hidden structure from unlabeled data. The system doesn't figure out the right output, but it explores the data and can draw inferences from datasets to describe hidden structures in the unlabeled data.

