Scrapy Cloud is a battle-tested cloud platform for running web crawlers (also known as spiders). Your spiders run in the cloud and scale as you need them. Think of it as a Heroku for web crawling.
As a student, you can start scraping the web in minutes and deploy your code to Scrapy Cloud for free, either from the command line or directly from GitHub. Your spiders can crawl without time limits, and your scraped data is stored for 120 days.
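To give a sense of what you would be deploying, here is a minimal sketch of a Scrapy spider. It crawls quotes.toscrape.com, Scrapy's public demo site, which is used here purely as a stand-in target; the spider name and selectors are illustrative, not part of any Scrapy Cloud setup.

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    """A small example spider that scrapes quotes and follows pagination."""

    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Extract each quote's text and author from the current page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }

        # Follow the "next page" link, if one exists, and parse it the same way.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Once a spider like this runs locally (for example with `scrapy crawl quotes`), deployment to Scrapy Cloud is typically done with the `shub` client: install it with `pip install shub`, authenticate with `shub login`, and push your project with `shub deploy`.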
Once you're set up, here are a few resources to help you get started. If you need some extra support, check out the Knowledge Base.
Learn Scrapy (start here)
Knowledge Base
API Documentation