Semi-supervised learning and GANs

So far, we have seen how GANs can be used to generate realistic images. In this section, we will see how a GAN can be used for classification tasks where we have little labeled data but still want to improve the accuracy of the classifier. We will again use the Street View House Numbers (SVHN) dataset to classify images. As before, we have two networks: the generator G and the discriminator D. In this case, the discriminator is trained to become a classifier. Another change is that the output of the discriminator goes through a softmax function instead of the sigmoid function seen earlier. The softmax function returns a probability distribution over the labels:
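As a minimal sketch of this output layer, the softmax can be written in NumPy as follows; the logit values below are made-up numbers for illustration, not taken from the SVHN model:

```python
import numpy as np

def softmax(logits):
    """Turn raw discriminator logits into a probability
    distribution over the class labels."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Hypothetical logits for the 10 SVHN digit classes
logits = np.array([2.0, 1.0, 0.1, 0.0, -1.0, 0.5, 0.3, -0.2, 0.8, 0.0])
probs = softmax(logits)
# probs is non-negative and sums to 1, unlike a per-unit sigmoid
```

Unlike a sigmoid, which scores each output independently, the softmax couples all the outputs so that they form a single distribution over the labels.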

Now we model the network as:

total cost = cost of labeled data + cost of unlabeled data

To get the cost of labeled data, we can use the cross_entropy function:

cost of labeled data   = cross_entropy(logits, labels)
cost of unlabeled data = cross_entropy(logits, real)

Then we can calculate the probability that an input is real by summing the softmax output over all the real classes:

real_prob = sum(softmax(real_classes))

Normal classifiers are trained on labeled data only. A semi-supervised GAN-based classifier, however, is trained on labeled data, real unlabeled data, and fake images. This works very well: the classifier makes fewer errors even when little labeled data is available during training.
