It's time to see some examples of unsupervised learning, given that we have spent the majority of this book on supervised learning models.
There are many situations in which unsupervised learning is appropriate. Some very common examples include the following:

- Finding groups of similar (and dissimilar) data points when we are not trying to predict any particular column
- Surfacing structure in high-dimensional data that simple exploratory analysis cannot reveal, even when the end goal is a supervised model
- Extracting new features from features that already exist
The first tends to be the most common reason that data scientists choose to use unsupervised learning. This case arises frequently when we are working with data and we are not explicitly trying to predict any of the columns; we merely wish to find groups of similar (and dissimilar) points. The second comes into play even when we are explicitly attempting to use a supervised model to predict a response variable. Sometimes simple EDA does not reveal any clear patterns in the few dimensions that humans can visualize, whereas a machine can pick up on data points behaving similarly to one another in higher dimensions.
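To make the first case concrete, here is a minimal sketch of grouping unlabeled points, assuming scikit-learn is available; the five-dimensional synthetic data and the choice of k-means are illustrative, not prescribed by the text. Note that there is no response column anywhere: the algorithm looks only at the features themselves.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic groups of points in 5 dimensions -- more than we can easily visualize
blob_a = rng.normal(loc=0.0, scale=0.5, size=(50, 5))
blob_b = rng.normal(loc=3.0, scale=0.5, size=(50, 5))
X = np.vstack([blob_a, blob_b])

# With no column to predict, KMeans groups rows purely by similarity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels[:5], labels[-5:])
```

Even though no human could eyeball similarity across five dimensions at once, the algorithm recovers the two groups from raw distances between rows.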
The third common reason to use unsupervised learning is to extract new features from features that already exist. This process (lovingly called feature extraction) might produce features that can be fed into a future supervised model or used for presentation purposes (marketing or otherwise).