Choosing an optimal number for K and cluster validation

A big part of K-means clustering is choosing the optimal number of clusters, K. If we knew this number ahead of time, it might defeat the purpose of using unsupervised learning in the first place, so we need a way to evaluate the output of our cluster analysis after the fact.

The problem is that because we are not making predictions against known labels, we cannot measure how right the algorithm is; metrics such as accuracy and RMSE go right out of the window.

The Silhouette Coefficient

The Silhouette Coefficient is a common metric for evaluating clustering performance in situations when the true cluster assignments are not known.

A Silhouette Coefficient is calculated for each observation as follows:

SC = (b - a) / max(a, b)

Let's look a little closer at the specific features of this formula:

  • a: Mean distance to all other points in its cluster
  • b: Mean distance to all other points in the next nearest cluster

It ranges from -1 (worst) to 1 (best). A global score is calculated by taking the mean of the per-observation scores. Scores near 1 indicate that observations sit in dense, well-separated clusters, while scores near -1 suggest they may have been assigned to the wrong cluster:

# calculate Silhouette Coefficient for K=3
# (assumes km is a KMeans model with n_clusters=3 already fit to X)
from sklearn import metrics
metrics.silhouette_score(X, km.labels_)
0.4578
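
To make the formula concrete, here is a minimal sketch that computes (b - a) / max(a, b) by hand for a single observation on a small made-up dataset and checks it against scikit-learn's silhouette_samples; the toy data and variable names are illustrative assumptions, not the beer dataset:

# a minimal sketch: compute the Silhouette Coefficient for one observation by hand
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

# toy 2-D data with two obvious clusters (illustrative only, not the beer data)
X_toy = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                  [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
labels = KMeans(n_clusters=2, random_state=1, n_init=10).fit(X_toy).labels_

i = 0  # the observation to inspect
dists = np.linalg.norm(X_toy - X_toy[i], axis=1)  # distance from point i to every point
same = labels == labels[i]
same[i] = False  # exclude the point itself from its own cluster

a = dists[same].mean()                 # mean distance to the rest of its cluster
b = dists[labels != labels[i]].mean()  # mean distance to the nearest other cluster

print((b - a) / max(a, b))                   # manual Silhouette Coefficient
print(silhouette_samples(X_toy, labels)[i])  # scikit-learn's per-observation value

The two printed values should match, since with only two clusters the "next nearest cluster" is simply the other one.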

Let's try calculating the coefficient for multiple values of K to find the best value:

# calculate SC for K=2 through K=19
# (X_scaled is the standardized feature matrix created with StandardScaler, shown later in this section)
from sklearn.cluster import KMeans

k_range = range(2, 20)
scores = []
for k in k_range:
    km = KMeans(n_clusters=k, random_state=1)
    km.fit(X_scaled)
    scores.append(metrics.silhouette_score(X_scaled, km.labels_))

# plot the results
import matplotlib.pyplot as plt

plt.plot(k_range, scores)
plt.xlabel('Number of clusters')
plt.ylabel('Silhouette Coefficient')
plt.grid(True)

So it looks like our optimal number of beer clusters is 4! In other words, the silhouette scores suggest that there are four distinct types of beer in this dataset.
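
We can also confirm this programmatically instead of reading the peak off the plot by eye; a minimal sketch, assuming the k_range and scores variables from the loop above:

# pick the K with the highest mean Silhouette Coefficient
# (assumes k_range and scores from the loop above)
best_k = k_range[scores.index(max(scores))]
print('Best K by silhouette:', best_k, 'with score', max(scores))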

K-means is a popular algorithm because of its computational efficiency and its simple, intuitive nature. However, K-means is highly scale dependent and is not suitable for data with widely varying shapes and densities. The scale dependence, at least, can be addressed by standardizing the data with scikit-learn's StandardScaler:

# center and scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# K-means with 3 clusters on scaled data
km = KMeans(n_clusters=3, random_state=1)
km.fit(X_scaled)

Easy!
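
One quick way to see why scaling matters here is to fit the same 3-cluster model on the raw and the standardized data and compare their Silhouette Coefficients; a minimal sketch, assuming the X, X_scaled, metrics, and KMeans names from the code above:

# compare the Silhouette Coefficient with and without scaling (K=3)
# (assumes X, X_scaled, metrics, and KMeans from the code above)
for name, data in [('raw', X), ('scaled', X_scaled)]:
    km = KMeans(n_clusters=3, random_state=1).fit(data)
    print(name, metrics.silhouette_score(data, km.labels_))

Keep in mind that the two scores are computed in different feature spaces, so this is only a rough indication of how much standardization changes the clustering.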

Now let's take a look at the third reason to use unsupervised methods: feature extraction.
