In this chapter, you have learned a number of practical tips for debugging and improving your models. You now have a substantial number of tools in your toolbox that will help you run real, practical machine learning projects and deploy them in real-life applications, such as trading.
Making sure your model works before deploying it is crucial, and failure to properly scrutinize your model can cost you, your employer, or your clients millions of dollars. For these reasons, some firms are reluctant to deploy machine learning models into trading at all. They fear that they will never understand the models and thus won't be able to manage them in a production environment. Hopefully, this chapter has alleviated that fear by showcasing some practical tools that can make models understandable, generalizable, and safe to deploy.
In the next chapter, we will look at a special, persistent, and dangerous problem associated with machine learning models: bias. Statistical models tend to fit to, and amplify, human biases, and financial institutions have to follow strict regulations to prevent their models from discriminating on the basis of race or gender. Our focus will be on how to detect and remove biases from our models in order to make them both fair and compliant.