Breaking the rule of thumb

In practice, you can often get away with breaking this rule and learn from fewer than 10 examples per feature; this mostly works when your model is simple and you use something called regularization (addressed in the next chapter).

Jake VanderPlas wrote an article (https://jakevdp.github.io/blog/2015/07/06/model-complexity-myth/) showing that a model can be learned even when it has more parameters than there are training examples. To demonstrate this, he used regularization.
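
As a rough illustration of this idea (a minimal sketch, not VanderPlas's exact experiment), the snippet below fits a linear model with more coefficients than training examples. The plain least-squares fit memorizes the training data, while ridge regression, which adds an L2 penalty on the coefficients, still generalizes to new data. The specific sizes, seed, and alpha value here are arbitrary choices for the demonstration.

```python
# Sketch: learning with more parameters than examples, made possible by regularization.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)

n_samples, n_features = 20, 50               # more features (parameters) than examples
X = rng.normal(size=(n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[:5] = [2.0, -1.0, 0.5, 1.5, -2.0]  # only a few features actually matter
y = X @ true_coef + rng.normal(scale=0.1, size=n_samples)

# Fresh data to check generalization
X_test = rng.normal(size=(200, n_features))
y_test = X_test @ true_coef + rng.normal(scale=0.1, size=200)

# Unregularized least squares: fits the training set exactly but
# tends to generalize poorly in this regime.
plain = LinearRegression().fit(X, y)

# Ridge regression penalizes large coefficients, which is what
# makes learning feasible with so few examples.
ridge = Ridge(alpha=1.0).fit(X, y)

print("plain R^2 on new data:", plain.score(X_test, y_test))
print("ridge R^2 on new data:", ridge.score(X_test, y_test))
```

Running this, the ridge model scores noticeably better on the held-out data, which is the point of the regularization argument.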
