
The Scrapy tutorial part IX: How To Run Scrapy Cloud Spiders

Developed by Pablo Hoffman and Shane Evans, Scrapy is an open-source Python framework built specifically for web data extraction. With Scrapy spiders, you can download HTML, parse and process the data, and save it in CSV, JSON, or XML format.

This video shows how you can deploy, run, and manage your crawlers in the cloud using Scrapy Cloud. It covers:

  • How to use Shub, the Zyte client
  • How to fetch the scraped data from the cloud (see the code sketch after this list)
  • Scrapy Cloud features
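
To give you a feel for the workflow before you watch: you typically authenticate with shub login and push your project with shub deploy, and once a job has finished you can pull its scraped items back down programmatically. Below is a minimal sketch of that last step using the python-scrapinghub client library; the API key, project ID (12345), and spider name ("myspider") are placeholders you would replace with your own values, and it assumes the library is installed (pip install scrapinghub).

    from scrapinghub import ScrapinghubClient

    API_KEY = "YOUR_SCRAPY_CLOUD_API_KEY"  # placeholder: your Scrapy Cloud API key
    PROJECT_ID = 12345                     # placeholder: your project ID
    SPIDER = "myspider"                    # placeholder: your spider's name

    client = ScrapinghubClient(API_KEY)
    project = client.get_project(PROJECT_ID)

    # Look up the most recently finished job for this spider.
    last_key = None
    for summary in project.jobs.iter(spider=SPIDER, state="finished", count=1):
        last_key = summary["key"]

    # Iterate over the items that job scraped and print them.
    if last_key:
        job = project.jobs.get(last_key)
        for item in job.items.iter():
            print(item)

If you prefer staying on the command line, shub can also retrieve a job's items directly (shub items), which the video walks through alongside the rest of the Scrapy Cloud features.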

If you haven't yet, we recommend first watching part I, part II, part III, part IV, part V, part VI, part VII, and part VIII of our tutorial series.

If you liked what you saw, we also recommend checking out the links below: