Scrapy Cloud

Focus on crawlers, not servers

Any number of spiders. Any team size.

Manage and automate spiders at scale.

Scrapy Cloud is a battle-tested cloud platform for running web crawlers (a.k.a. spiders). Your spiders run in the cloud and scale on demand, from thousands to billions of pages. Think of it as a Heroku for web crawling.

Over 2,000 companies trust Scrapy Cloud to run their spiders

Make managing spiders a breeze

Run, monitor, and control your crawlers with Scrapy Cloud's easy-to-use web interface.

On-demand scaling

Increase the scale and firepower of your scraping operation with only a few clicks.

Easy integration

Seamlessly integrate Crawlera, Splash, Spidermon, and other tools into your web scraping stack to take the hassle out of scraping the web at scale.

Full suite of QA tools

Built-in spider monitoring, logging, and data QA tools, along with easy integration of Spidermon, our open-source spider monitoring framework.

Zero vendor lock-in

Develop your code using Scrapy, the most popular open-source web scraping framework, and retain the freedom to migrate it to any hosting solution.

Give it a try for free

Get access to the Scrapy Cloud free version today.

Start scraping the web in minutes

Deploy code to Scrapy Cloud via your command line or directly with GitHub.


pip install shub
shub login
API key: (api_key)
shub deploy (project_ID)


Or deploy your code to Scrapy Cloud directly from GitHub.
Connect to GitHub

Everything You Need Straight Out of the Box

Scrapy Cloud has been specifically designed for web scraping at scale.

Real-time dashboard

Intelligent scheduling


Customisable containers

Crawlera integration


Elastic Pricing

Only pay for as much capacity as you need. Simply purchase capacity units, each providing 1 GB of RAM and one concurrent crawl, when you need to scale up.


Ideal for small projects or if you simply want to give Scrapy Cloud a try.
Unlimited team members
Unlimited projects
Unlimited requests
1 hour crawl time
1 concurrent crawl
7 day data retention
Free Forever
Sign up now


Ideal for developers and companies who want a hassle free way to scrape the web at scale.
Unlimited team members
Unlimited projects
Unlimited requests
Unlimited crawl runtime
Unlimited concurrent crawls
Intelligent crawl scheduling
120 day data retention
Personalised support
Per Unit Per Month
Sign up now

1 Scrapy Unit = 1 GB of RAM and 1 concurrent crawl

With Scrapy Cloud, you control how you allocate your resources. Just update your settings to allocate 1 Scrapy Cloud unit to each job.
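As a sketch of that workflow, the snippet below schedules a job with an explicit unit allocation through the python-scrapinghub client. The project ID and spider name are hypothetical placeholders, and the snippet assumes the `SHUB_APIKEY` environment variable (the same key `shub` uses) is set; nothing here is copied from a real account.

```python
import os

# Number of Scrapy Cloud units to allocate to this job
# (assumption: your plan has at least this much capacity purchased).
UNITS = 1

# Only talk to the API when credentials are actually available.
if os.environ.get("SHUB_APIKEY"):
    from scrapinghub import ScrapinghubClient  # pip install scrapinghub

    client = ScrapinghubClient(os.environ["SHUB_APIKEY"])
    project = client.get_project(12345)  # hypothetical project ID
    # Schedule the spider with the chosen unit allocation.
    job = project.jobs.run("myspider", units=UNITS)
    print(job.key)
```

Each allocated unit gives the job 1 GB of RAM and one concurrent crawl slot, so scaling a job up or down is a matter of changing a single number.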

Compare plans to find the right solution for your needs, or talk to one of our solution architects today.

  • Provides a simple way to run your crawls and browse results
    "Scrapy is really pleasant to work with. It hides most of the complexity of web crawling, letting you focus on the primary work of data extraction.

    Scrapinghub provides a simple way to run your crawls and browse results, which is especially useful for larger projects with multiple developers."

    Jacob Perkins
  • Does not force vendor lock-in
    "I love that Scrapy Cloud does not force vendor lock-in, unlike the other scraping and crawling services. Investment developing the right scraping logic is not stuck in some proprietary format or jailed behind some user friendly interface. With Scrapy Cloud scraping logic is in standard Python code calling the open-source Scrapy Python library. You retain the freedom to run the scraping Python code on your own computers or someone else’s servers."
    Castedo Ellerman
    Quantitative Analyst/Developer

Need data you can rely on?

Tell us about your project or start using our scraping tools today.

© 2010 - 2020 Scrapinghub
