Implementing independent component analysis (ICA)

Other useful dimensionality reduction techniques closely related to PCA are provided by scikit-learn, but not OpenCV. We mention them here for the sake of completeness. ICA performs the same mathematical steps as PCA, but it chooses the components of the decomposition to be as statistically independent of each other as possible, rather than merely uncorrelated directions of maximal variance, as in PCA.
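To see what statistical independence buys you, consider the classic blind source separation setting: two independent signals are linearly mixed, and ICA recovers them from the mixtures alone. The sketch below is illustrative only; the signals and the mixing matrix are made up for this example and are not part of the book's dataset.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent, non-Gaussian toy sources (assumed for illustration)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                    # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))          # source 2: square wave
S = np.c_[s1, s2]

# Mix the sources with a hypothetical mixing matrix A
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = S @ A.T                           # observed mixtures

# ICA recovers the sources up to sign, scale, and ordering
ica = FastICA(n_components=2, random_state=42)
S_hat = ica.fit_transform(X)
```

Each recovered component should correlate strongly with one of the true sources, even though ICA never saw them; PCA applied to `X` would instead return orthogonal directions of maximal variance, which here do not coincide with the sources.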

In scikit-learn, ICA is available from the decomposition module:

In [9]:  from sklearn import decomposition
In [10]: ica = decomposition.FastICA(tol=0.005)
Why do we use tol=0.005? Because with the default settings, FastICA fails to converge on this data. There are two ways to help it converge: increase the maximum number of iterations (max_iter, whose default value is 200) or loosen the tolerance (tol, whose default value is 0.0001). I tried increasing the iterations but, unfortunately, it didn't work, so I went ahead with the other option. Can you figure out why it didn't converge?
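The two convergence knobs mentioned above can be sketched side by side. Both `max_iter` and `tol` are actual FastICA constructor parameters; the values chosen here are just examples:

```python
from sklearn.decomposition import FastICA

# Option 1: raise the iteration cap (default is 200)
ica_more_iters = FastICA(max_iter=1000)

# Option 2: loosen the convergence tolerance (default is 0.0001)
ica_looser_tol = FastICA(tol=0.005)
```

Loosening the tolerance declares convergence earlier, at the cost of a less precise unmixing estimate; raising the iteration cap simply gives the fixed-point iteration more chances to settle.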

As seen before, the data transformation happens in the fit_transform function:

In [11]: X2 = ica.fit_transform(X)

In our case, plotting the rotated data leads to a result similar to the one achieved with PCA earlier:

In [12]: plt.figure(figsize=(10, 6))
... plt.plot(X2[:, 0], X2[:, 1], 'o')
... plt.xlabel('first independent component')
... plt.ylabel('second independent component')
... plt.axis([-0.2, 0.2, -0.2, 0.2])
Out[12]: [-0.2, 0.2, -0.2, 0.2]

This can be seen in the following diagram:
