Summary

In this chapter, you learned about a wide range of conventional tools for working with time series data. You also explored one-dimensional convolutions and recurrent architectures, and finally saw a simple way to make your models express uncertainty.

Time series are the most iconic form of financial data. This chapter has given you a rich toolbox for working with them. Let's recap everything we covered, using the example of forecasting web traffic for Wikipedia:

  • Basic data exploration to understand what we are dealing with
  • Fourier transformation and autocorrelation as tools for feature engineering and understanding data
  • Using a simple median forecast as a baseline and sanity check
  • Understanding and using ARIMA and Kalman filters as classic prediction models
  • Designing features, including building a data loading mechanism for all our time series
  • Using one-dimensional convolutions and variants such as causal convolutions and dilated convolutions
  • Understanding the purpose and use of RNNs and their more powerful variant, LSTMs
  • Understanding how to add uncertainty to our forecasts with the dropout trick, taking our first step into Bayesian learning
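As a quick illustration of the last point, here is a minimal sketch of the dropout trick (often called Monte Carlo dropout): keep dropout active at prediction time and repeat the forward pass, so the spread of the predictions serves as an uncertainty estimate. This version uses plain NumPy with made-up toy weights and inputs rather than a trained network, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def predict_with_dropout(x, weights, drop_prob=0.5):
    """One stochastic forward pass: dropout stays active at prediction time."""
    mask = rng.random(weights.shape) > drop_prob
    # Scale the surviving weights so the expected output matches
    # the full (no-dropout) network.
    return x @ (weights * mask) / (1.0 - drop_prob)

x = np.array([1.0, 2.0, 3.0])              # toy input features
weights = np.array([[0.2], [0.5], [0.1]])  # toy "trained" weights

# Repeating the stochastic pass yields a distribution over predictions;
# its mean is the forecast and its standard deviation the uncertainty.
samples = np.stack([predict_with_dropout(x, weights) for _ in range(200)])
mean_prediction = samples.mean(axis=0)
uncertainty = samples.std(axis=0)
```

In a Keras model, the same effect is achieved by running the model repeatedly with dropout enabled at inference and aggregating the outputs in exactly this way.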

This rich toolbox of time series techniques comes in especially handy in the next chapter, where we will cover natural language processing. Language is basically a sequence, or time series, of words. This means we can reuse many tools from time series modeling for natural language processing.

In the next chapter, you will learn how to find company names in text, how to group text by topic, and even how to translate text using neural networks.
