The k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression in machine learning. It is one of the simplest algorithms for predicting outcome values. k-NN is also known as a lazy learner: it simply stores the training data and defers all computation until it is given a test tuple.

The K-Nearest Neighbor Algorithm Follows These Steps:

1. Pick a value of K

2. Search for K observations in the training data that are nearest to the measurements of the unknown data

3. Use the most popular response value from the K nearest neighbors as the predicted response value for the unknown data (the Iris data in our example).
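The steps above can be sketched as a minimal from-scratch k-NN classifier (illustrative only; the function and variable names here are my own, not from scikit-learn):

```python
import numpy as np

def knn_predict(x_train, y_train, x_new, k=1):
    # Step 2: Euclidean distance from x_new to every training observation
    distances = np.sqrt(((x_train - x_new) ** 2).sum(axis=1))
    # Indices of the k nearest neighbors
    nearest = np.argsort(distances)[:k]
    # Step 3: most popular response value among the k neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Tiny example: two classes on a line
x_train = np.array([[0.0], [1.0], [5.0], [6.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(x_train, y_train, np.array([0.5]), k=3))  # → 0
```

The new point at 0.5 has neighbors 0.0, 1.0 and 5.0; two of the three belong to class 0, so class 0 wins the vote.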

In the previous post we loaded our first dataset, IRIS. Now we will use the same dataset to train our model with the k-NN algorithm and predict the outcome values. Click here to check which algorithm to use when.

Follow the code explained in the previous post to load the IRIS dataset using Scikit-Learn.

# Below code loads the 4 attributes of IRIS into the x_train_data object. This can be verified by using the shape command

**x_train_data = iris.data**

**y_train_data_outcome = iris.target**
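As the comment above mentions, the shapes can be verified like this (assuming the dataset was loaded with load_iris, as in the previous post):

```python
from sklearn.datasets import load_iris

iris = load_iris()
x_train_data = iris.data
y_train_data_outcome = iris.target

print(x_train_data.shape)          # (150, 4) — 150 samples, 4 attributes each
print(y_train_data_outcome.shape)  # (150,)   — one outcome per sample
```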

#Import KNN classifier from sklearn

**from sklearn.neighbors import KNeighborsClassifier**

#Define the neighbors value. I have used 1; you can try 2, 3, 4 or 5 and check the results.

**knn = KNeighborsClassifier(n_neighbors=1)**

**print(knn)**

#Train the KNN classifier on the training samples: the 4 attributes from iris.data and the outcome from iris.target

**knn.fit(x_train_data, y_train_data_outcome)**

#Define predict_species with 4 attribute values

**predict_species = [3, 5, 4, 2]**

# Use the knn predict method to predict the species class.

**print(knn.predict([predict_species]))**
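The prediction comes back as a class index (0, 1 or 2). To see the actual species name, you can look it up in the dataset's target_names array, a standard attribute of the loaded Iris object:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(iris.data, iris.target)

prediction = knn.predict([[3, 5, 4, 2]])
print(prediction)                     # class index of the predicted species
print(iris.target_names[prediction])  # the corresponding species name
```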

You can change the n_neighbors value and see what happens.
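One way to compare n_neighbors values systematically is to hold out part of the data and score each model on it; train_test_split and the classifier's score method are standard scikit-learn utilities (the split sizes and random_state here are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=4)

scores = []
for k in range(1, 6):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(x_train, y_train)
    score = knn.score(x_test, y_test)  # accuracy on the held-out data
    scores.append(score)
    print(k, score)
```

Scoring on held-out data rather than the training data matters here: with n_neighbors=1 every training point is its own nearest neighbor, so training accuracy is always a perfect (and misleading) 100%.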

## Thursday, November 30, 2017

### Predicting Values By Using k-nearest Neighbors Machine Learning Algorithm
