
Scrapy & Zyte Automatic Extraction API integration

October 15, 2019

We’ve just released a new open-source Scrapy middleware which makes it easy to integrate Zyte Automatic Extraction into your existing Scrapy spider. If you haven’t heard about Zyte Automatic Extraction (formerly AutoExtract) yet, it’s an AI-based web scraping tool that automatically extracts data from web pages without the need to write any code. Learn more about Zyte Automatic Extraction here.


Installation

This project requires Python 3.6+ and pip. A virtual environment is strongly encouraged.

$ pip install git+https://github.com/scrapinghub/scrapy-autoextract


Enable middleware

DOWNLOADER_MIDDLEWARES = {
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}

This middleware should be the last one to execute, so make sure to give it the highest order value in DOWNLOADER_MIDDLEWARES (higher numbers run closer to the downloader).
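For example, if your project runs other downloader middlewares, keep AutoExtract at a higher number than the rest. A minimal sketch, assuming a hypothetical custom middleware in your own project:

```python
DOWNLOADER_MIDDLEWARES = {
    # hypothetical project middleware, runs earlier (lower number)
    'myproject.middlewares.MyCustomMiddleware': 500,
    # AutoExtract middleware runs last (highest number)
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}
```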

Zyte Automatic Extraction settings


These settings must be defined in order for Zyte Automatic Extraction to work:

  • AUTOEXTRACT_USER: your Zyte Automatic Extraction API key
  • AUTOEXTRACT_PAGE_TYPE: the kind of data to be extracted (current options: "product" or "article")

The following settings are optional:

  • AUTOEXTRACT_URL: Zyte Automatic Extraction service URL (default: autoextract.scrapinghub.com)
  • AUTOEXTRACT_TIMEOUT: response timeout from the Zyte Automatic Extraction API (default: 660 seconds)
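Put together in your project's settings.py, these settings look like the sketch below; the API key and page type values here are placeholders:

```python
# settings.py -- placeholder values, replace with your own
AUTOEXTRACT_USER = 'my_autoextract_apikey'   # your API key
AUTOEXTRACT_PAGE_TYPE = 'article'            # or 'product'

# Optional overrides (defaults apply when omitted)
# AUTOEXTRACT_URL = 'autoextract.scrapinghub.com'
# AUTOEXTRACT_TIMEOUT = 660
```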


Zyte Automatic Extraction requests are opt-in: they must be enabled for each request by adding the following to the request's meta:

meta['autoextract'] = {'enabled': True} 
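If many requests need this flag, a small helper keeps spider code tidy. This is a hypothetical convenience function of our own, not part of the middleware:

```python
def with_autoextract(meta=None):
    """Return a copy of a request meta dict with AutoExtract enabled."""
    meta = dict(meta or {})
    meta['autoextract'] = {'enabled': True}
    return meta

# Usage: scrapy.Request(url, meta=with_autoextract(), callback=self.parse)
```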

If the request was sent to Zyte Automatic Extraction, inside your Scrapy spider you can access the result through the meta attribute:

def parse(self, response):
    yield response.meta['autoextract']
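The result is a plain dict, so you can reshape it before yielding instead of emitting it whole. A sketch using the 'article' field names from the example output later in this post; the helper name is ours:

```python
def summarize_article(result):
    """Pick a few common fields out of an AutoExtract 'article' result."""
    return {
        'headline': result.get('headline'),
        'author': result.get('author'),
        'body': result.get('articleBody'),
    }

sample = {'headline': 'Lorem Ipsum Dolor Sit Amet',
          'author': 'Attila Toth',
          'articleBody': 'Lorem ipsum dolor sit amet...'}
print(summarize_article(sample)['headline'])  # prints the headline field
```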


In the Scrapy settings file:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}

# Disable AutoThrottle middleware
AUTOTHROTTLE_ENABLED = False

AUTOEXTRACT_USER = 'my_autoextract_apikey'
AUTOEXTRACT_PAGE_TYPE = 'article'

In the spider:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, meta={'autoextract': {'enabled': True}}, callback=self.parse)

    def parse(self, response):
        yield response.meta['autoextract']

Example output:

{
  "articleBody": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat...",
  "description": "Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatu",
  "headline": "Lorem Ipsum Dolor Sit Amet",
  "author": "Attila Toth",
  "articleBodyHtml": "<article>\n\n<p>Lorem ipsum..."
}


Limitations

  • The incoming spider request is rendered by Zyte Automatic Extraction, not just downloaded by Scrapy, which can change the result: the IP is different, the headers are different, etc.
  • Only GET requests are supported.
  • Custom headers and cookies are not supported (i.e. Scrapy features for setting them don't work).
  • Proxies are not supported (they would work incorrectly, sitting between Scrapy and Zyte Automatic Extraction instead of between Zyte Automatic Extraction and the website).
  • The AutoThrottle extension can work incorrectly for Zyte Automatic Extraction requests because response times can be much larger than the time needed to download a page directly, so it's best to set AUTOTHROTTLE_ENABLED=False in the settings.
  • Redirects are handled by Zyte Automatic Extraction, not by Scrapy, so redirect-related middlewares might have no effect.
  • Retries should be disabled because Zyte Automatic Extraction handles them internally (set RETRY_ENABLED=False in the settings). The exception is HTTP code 429, which the API returns when too many requests are sent in a short amount of time; for that case, it's best to use RETRY_HTTP_CODES=[429].
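One way to combine the retry and throttling recommendations above in your project's settings.py; treat this as a sketch, not the only valid configuration:

```python
# settings.py -- AutoExtract-related recommendations
AUTOTHROTTLE_ENABLED = False    # AutoThrottle misreads AutoExtract response times
# Let AutoExtract handle retries internally, except rate limiting (HTTP 429):
RETRY_HTTP_CODES = [429]
```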

Check out the middleware on GitHub or learn more about Zyte Automatic Extraction (formerly AutoExtract)!

Written by Zyte team