In centroid-based clustering, each cluster is represented by a central vector, which need not itself be a member of the data set. In this type of learning, the number of clusters must be specified before the model is trained. K-means is the best-known example of this approach: given a fixed number of clusters K, it is formulated as an optimization problem whose goal is to find the K cluster centers and assign each data object to its nearest center. In short, the objective is to minimize the sum of squared distances between each data point and the center of the cluster it is assigned to.
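The standard procedure for this optimization is Lloyd's algorithm, which alternates between assigning points to their nearest center and recomputing each center as the mean of its assigned points. A minimal sketch in NumPy follows; the function and variable names are illustrative, and the synthetic two-blob data set is only an assumption for demonstration:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Sketch of Lloyd's algorithm for K-means clustering."""
    rng = np.random.default_rng(seed)
    # Initialize centers by sampling k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: nearest center by squared Euclidean distance.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points
        # (an empty cluster keeps its previous center).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # converged: assignments can no longer change
        centers = new_centers
    return centers, labels

# Illustrative usage on two well-separated Gaussian blobs with K = 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(5, 0.5, (50, 2))])
centers, labels = kmeans(X, k=2)
print(centers)
```

Note that Lloyd's algorithm only guarantees convergence to a local minimum of the objective, so in practice it is commonly run several times from different random initializations and the run with the lowest total squared distance is kept.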