Summary

In this chapter, we walked through a variety of ways to scrape data from a web page. Regular expressions can be useful for a one-off scrape or to avoid the overhead of parsing the entire web page, while BeautifulSoup provides a forgiving, high-level interface without any difficult dependencies. However, lxml is generally the best choice because of its speed and extensive functionality, so we will rely on it in future examples.
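
As a compact illustration of that trade-off, the following sketch extracts the same value with all three approaches. The HTML fragment and its contents are invented for the example, and the cssselect() call assumes the cssselect package is installed alongside lxml:

import re
from bs4 import BeautifulSoup
from lxml.html import fromstring

# A hypothetical fragment standing in for a downloaded page
html = '<div id="result"><span class="area">244,820 square kilometres</span></div>'

# Regular expressions: quick for a one-off scrape, but brittle if the markup changes
area_re = re.search(r'<span class="area">(.*?)</span>', html).group(1)

# BeautifulSoup: a forgiving, high-level interface
soup = BeautifulSoup(html, 'html.parser')
area_bs = soup.find('span', class_='area').text

# lxml: fast, C-backed parsing with CSS selector support
tree = fromstring(html)
area_lxml = tree.cssselect('span.area')[0].text_content()

assert area_re == area_bs == area_lxml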

We also learned how to inspect HTML pages using the browser's developer tools and console, and how to write CSS selectors and XPath expressions to match and extract content from the downloaded pages.
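
The two selector syntaxes can often match the same elements; a minimal sketch, using hypothetical list markup, shows both side by side:

from lxml.html import fromstring

html = '<ul><li class="lang">Python</li><li class="lang">Go</li></ul>'
tree = fromstring(html)

# The same elements matched two ways
via_css = [li.text for li in tree.cssselect('li.lang')]
via_xpath = [li.text for li in tree.xpath('//li[@class="lang"]')]

assert via_css == via_xpath == ['Python', 'Go']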

In the next chapter, we will introduce caching, which allows us to save web pages so that they need to be downloaded only the first time a crawler is run.
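
As a rough preview of the idea, a cache can be as simple as a dictionary keyed by URL; the download callable below is a hypothetical stand-in for whatever downloader the crawler uses:

# A minimal in-memory sketch of the caching idea
cache = {}

def cached_download(url, download):
    """Return the cached page if we have it; otherwise download and store it."""
    if url not in cache:
        cache[url] = download(url)
    return cache[url]

The next chapter develops this idea properly, with storage that persists between crawler runs.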
