Summary

In this chapter, you have learned about the two most important types of generative models: autoencoders and GANs. We first developed an autoencoder for MNIST images. We then used a similar architecture to encode credit card data and detect fraud. Afterward, we expanded the autoencoder to a VAE. This allowed us to learn distributions of encodings and generate new data that we could use for training.
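The core idea of an autoencoder is captured by a few lines of code: compress the input through a narrow bottleneck, then reconstruct it, and train on the reconstruction error. The following is a minimal sketch in plain NumPy; the layer sizes, learning rate, and the random stand-in data are illustrative choices, not the chapter's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_in, n_code = 256, 784, 32

# Random "pixels" in [0, 1] stand in for flattened MNIST images.
X = rng.random((n_samples, n_in))

W_enc = rng.normal(0.0, 0.05, (n_in, n_code))   # encoder weights
W_dec = rng.normal(0.0, 0.05, (n_code, n_in))   # decoder weights

def forward(X):
    code = np.maximum(X @ W_enc, 0.0)   # ReLU bottleneck encoding
    return code, code @ W_dec           # linear reconstruction

_, recon = forward(X)
mse_before = np.mean((recon - X) ** 2)

lr = 0.01
for _ in range(500):
    code, recon = forward(X)
    err = recon - X                               # reconstruction residual
    grad_dec = code.T @ err / n_samples           # decoder gradient
    grad_code = (err @ W_dec.T) * (code > 0)      # backprop through ReLU
    grad_enc = X.T @ grad_code / n_samples        # encoder gradient
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

_, recon = forward(X)
mse_after = np.mean((recon - X) ** 2)   # lower than mse_before
```

The 32-dimensional `code` is the learned encoding; for the fraud use case, an unusually large reconstruction error on a transaction is what flags it as a potential outlier.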

We then learned about GANs, again first in the context of MNIST images and then in the context of credit card fraud. We used a semi-supervised GAN (SGAN) to reduce the amount of labeled data needed to train our fraud detector, and we used the model's outputs to reduce the amount of manual labeling required through active learning and smarter labeling interfaces.
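The simplest form of active learning is uncertainty sampling: have a human label the examples the model is least sure about first. A short sketch, using synthetic classifier probabilities in place of a real fraud detector's outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic P(fraud) scores for 1,000 unlabeled transactions; in practice
# these would come from the trained fraud detector.
proba = rng.random(1000)

# Distance from the 0.5 decision boundary: small means uncertain.
uncertainty = np.abs(proba - 0.5)

# Send the 10 most uncertain transactions to a human labeler first.
query_idx = np.argsort(uncertainty)[:10]
```

Labeling these borderline cases tends to improve the classifier far more per label than annotating examples the model already classifies confidently.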

We also discussed latent spaces and their uses in financial analysis, and saw how the t-SNE algorithm can be used to visualize high-dimensional (latent) data. You also got a first impression of how machine learning can solve game-theoretic optimization problems: GANs solve a minimax problem, a type of problem that appears frequently in economics and finance.
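That minimax game is usually written as a single value function, following Goodfellow et al.'s original GAN formulation, which the discriminator $D$ tries to maximize while the generator $G$ tries to minimize:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

At the equilibrium of this game, the generator's samples are indistinguishable from real data, and the best the discriminator can do is guess.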

In the next chapter, we will take a deep dive into exactly that type of optimization as we cover reinforcement learning.
