Nikita Vostretsov
3 Mins
February 18, 2021

Scrapy update: Better broad crawl performance

When crawling the web, there’s always a speed limit. A spider can't fetch pages faster than the host is willing to serve them. Serving pages consumes resources - CPU, disk, network bandwidth, etc. - and these resources cost money. Unrestricted serving combined with extensive crawling is the worst combination: it can bring an application to a halt and deny service to its users. Taking all this into account, limiting serving capacity is natural.

This article explains which Scrapy settings help you honor these limits and how to achieve better performance during broad crawls in the presence of these limits.

Problem statement

First of all, we need a way to differentiate the entities behind domain names. The simplest and fastest rule is "entities never share a single exact domain name": http://example1.com and http://example2.com are different entities, and so are http://www.example.com and http://about.example.com. Another rule is "entities never share a single IP". In that case Scrapy has to send DNS queries to resolve domain names to IP addresses before it can decide. This solves the problem of different domain names being served from a single host, so http://www.example.com and http://about.example.com become the same entity.

For every entity, there is a slot in the Downloader. The number of requests sent to each entity simultaneously is limited by the CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP setting; which one applies depends on the chosen way to differentiate between entities. Such requests can be called active, or running. Requests enqueued into the Downloader but not yet sent to the host are called inactive, and every slot has a queue for them. After finishing an active request and before starting one from the inactive queue, Scrapy waits for DOWNLOAD_DELAY seconds.
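
For example, a project that wants to stay well below a host's capacity might configure these settings roughly as follows; the values are purely illustrative, not recommendations:

    # settings.py -- illustrative values, tune them for your targets

    # Differentiate entities by exact domain name:
    CONCURRENT_REQUESTS_PER_DOMAIN = 4

    # ...or by resolved IP address; when this is non-zero it is used
    # instead of the per-domain limit, and delays are applied per IP.
    # CONCURRENT_REQUESTS_PER_IP = 4

    # Pause between consecutive requests to the same slot, in seconds.
    DOWNLOAD_DELAY = 0.5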

This approach requires tuning to keep a balance between performance and honoring the limits. A less tuning-sensitive approach is implemented in the AutoThrottle extension; its documentation provides very good background and a description of how it works.
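
AutoThrottle is enabled and tuned through a few settings of its own; a minimal configuration, again with illustrative values, looks roughly like this:

    # settings.py -- enable AutoThrottle (values are illustrative)
    AUTOTHROTTLE_ENABLED = True
    AUTOTHROTTLE_START_DELAY = 5.0         # initial download delay, seconds
    AUTOTHROTTLE_MAX_DELAY = 60.0          # upper bound for the adjusted delay
    AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average parallel requests per remote site
    AUTOTHROTTLE_DEBUG = False             # set True to log every throttling decision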

Possible approaches

The downloader doesn't decide which requests are enqueued into it; that is the Scheduler's job. The first implementation of this decision-making doesn't take entities into account. It works well when crawling a specific entity, but the situation is very different for broad crawls: the inactive queues of some slots grow too long, while other slots run far below their allowed number of active requests.

After introducing the concept of entities to the Scheduler, there are different strategies for selecting the entity of the next request.

A round-robin algorithm can be used for request scheduling (a minimal sketch follows the list):

  • store all entities in FIFO queue Q
  • when the next request should be scheduled pop one entity E1 (from the top)
  • issue a request to E1
  • push E1 back into Q (to the bottom)
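
Here is a minimal sketch of this strategy in Python, using plain data structures rather than Scrapy's actual Scheduler API; the pending-request mapping and class name are assumptions made for illustration:

    from collections import deque

    class RoundRobinScheduler:
        """Illustrative round-robin selection over entities (not Scrapy's API)."""

        def __init__(self, pending_by_entity):
            # pending_by_entity: dict mapping entity -> deque of its pending requests
            self.pending = pending_by_entity
            self.queue = deque(self.pending)  # FIFO queue Q of entities

        def next_request(self):
            # Pop entity E1 from the top, issue one of its requests,
            # then push E1 back to the bottom of Q.
            for _ in range(len(self.queue)):
                entity = self.queue.popleft()
                requests = self.pending.get(entity)
                if requests:
                    request = requests.popleft()
                    self.queue.append(entity)
                    return request
                # Entity has nothing pending: drop it from the rotation.
            return None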

This approach provides an even flow of requests. For it to work well, every crawled entity should serve pages at the same speed. In the real world that is not the case: different hosts have different rendering times, and network latencies differ as well.

Let's look at what happens in such a situation: the slot of a slow host accumulates a long queue of inactive requests, while the slots of faster hosts sit underused waiting for their turn.

In the end, the target is to keep the Downloader's queues of inactive requests as short as possible. A generic computer-science algorithm doesn't work well in these real-life conditions; a more task-specific approach is needed.

Instead of using a model, real information can be used: the Downloader knows the length of every queue, and providing that knowledge to the Scheduler solves the problem.
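
A sketch of the idea, again simplified and not Scrapy's real API: among the entities that still have pending requests, pick the one whose downloader slot is currently the least loaded. The slot_load() helper below is hypothetical and stands in for the Downloader's real per-slot bookkeeping:

    class DownloaderAwareScheduler:
        """Illustrative 'ask the downloader' selection (not Scrapy's API)."""

        def __init__(self, pending_by_entity, downloader):
            self.pending = pending_by_entity   # entity -> deque of pending requests
            self.downloader = downloader       # exposes per-slot queue lengths

        def next_request(self):
            # Among entities that still have pending requests, choose the one
            # whose downloader slot has the fewest requests in flight or queued.
            candidates = [e for e, reqs in self.pending.items() if reqs]
            if not candidates:
                return None
            entity = min(candidates, key=self.downloader.slot_load)  # hypothetical helper
            return self.pending[entity].popleft()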

Experimental results in Scrapy

Both of these approaches were implemented in Scrapy. To select the fastest one, the broadworm mode of scrapy-bench was used. In this mode, 1000 entities are emulated by 1000 domain aliases pointing to the same server, which introduces artificial delays on top of its serving time. The results are presented in the table below.

Algorithm            Items per second crawled    Speedup
Entity-unaware       2.34                        1x
Round-robin          7.56                        3x
Ask the downloader   23.12                       10x

Based on these numbers, it was decided not to preserve the round-robin implementation in Scrapy’s codebase, but you can still find it in the commit history.
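
If you want to try the downloader-aware behaviour in your own broad crawls, recent Scrapy versions expose it through the scheduler priority queue setting; check the broad crawl documentation for your Scrapy version, but the switch should look like this:

    # settings.py -- enable the downloader-aware priority queue (verify for your version)
    SCHEDULER_PRIORITY_QUEUE = "scrapy.pqueues.DownloaderAwarePriorityQueue"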