Training a GAN model

Most machine learning models described in earlier chapters are trained by optimization: we minimize a cost function over the model's parameter space. GANs differ because they consist of two networks, the generator G and the discriminator D, each with its own cost. A simple way to view this is as a zero-sum game: the discriminator's cost is the negative of the generator's cost. Equivalently, we can define a single value function that the generator tries to minimize and the discriminator tries to maximize. This training process is quite different from supervised training. Here, we train the two models, the generative model and the discriminative model, simultaneously: the generative model G captures the data distribution, while the discriminative model D estimates the probability that a sample came from the training data rather than from G. GANs are also sensitive to their initial weights, so we use batch normalization, which stabilizes training in addition to improving performance.
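The zero-sum view above can be made concrete with a toy NumPy sketch of the standard GAN value function, V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]. This is an illustrative example, not the book's code: the function names and the sample discriminator outputs are our own, chosen only to show that a discriminator which separates real from generated samples attains a higher value, and that the generator's cost is the negative of the discriminator's.

```python
import numpy as np

def value_function(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: D's probability outputs on real training samples.
    d_fake: D's probability outputs on samples produced by G.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A discriminator that assigns high probability to real data and low
# probability to generated data (hypothetical outputs for illustration):
good_d = value_function(np.array([0.9, 0.8]), np.array([0.1, 0.2]))

# An uninformative discriminator that outputs 0.5 everywhere:
poor_d = value_function(np.array([0.5, 0.5]), np.array([0.5, 0.5]))

# D maximizes V, i.e. minimizes -V; in the zero-sum formulation the
# generator's cost is exactly the negative of the discriminator's.
discriminator_cost = -good_d
generator_cost = good_d

print(good_d > poor_d)  # the better discriminator attains a larger V
```

In practice, each training step alternates: the discriminator is updated to increase V while the generator is updated to decrease it.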
