Blog
Winter Sales Showdown: Black Friday Vs Cyber Monday Vs Green Monday
Cecilia Haynes
4 Mins
December 21, 2015
Blog
StartupChats: Embracing Remote Working for Success
Pablo Hoffman
< 1 Min
July 17, 2015
Earlier this week, Scrapinghub was invited along with several other fully-distributed companies to participate in a remote working Q&A hosted by Startups Canada.
Blog
GitHub Repository: Managing Vacations in a Distributed Team
Pablo Hoffman
3 Mins
June 8, 2015
Here at Zyte we are a remote team of 100+ engineers distributed among 30+ countries. As part of their standard contract, Zytebers get 20 vacation days per year plus their local country holidays off, and yet we spend almost zero time managing this. How do we do it? The answer is "git", and here we explain how.
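A minimal sketch of the idea the post hints at, assuming vacations are tracked as one plain-text file per person in a shared git repository; the repository path, file layout, and `request_vacation` helper below are hypothetical illustrations, not Zyte's actual setup.

```python
# Sketch: record a vacation as a line in the person's file, then commit it.
# Assumes `vacations-repo` is a local clone of a shared git repository.
import subprocess
from pathlib import Path

REPO = Path("vacations-repo")  # hypothetical local clone

def request_vacation(person: str, start: str, end: str) -> None:
    """Append a vacation entry to the person's file and commit the change."""
    entry_file = REPO / f"{person}.txt"
    with entry_file.open("a") as f:
        f.write(f"{start} - {end}\n")
    subprocess.run(["git", "-C", str(REPO), "add", entry_file.name], check=True)
    subprocess.run(
        ["git", "-C", str(REPO), "commit", "-m",
         f"{person}: vacation {start} to {end}"],
        check=True,
    )

# Example usage:
# request_vacation("pablo", "2015-07-01", "2015-07-10")
```

Vacation requests then become ordinary commits (or pull requests), so review, history, and conflict handling come for free from git itself.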
Blog
Gender Inequality Across Programming Languages
Pablo Hoffman
2 Mins
May 27, 2015
Gender inequality is a hot topic in the tech industry. Over the last several years we've gathered business profiles for our clients, and we realized this data would prove useful in identifying trends in how gender and employment relate to one another.
Blog
Traveling Tips for Remote Workers: Balancing Work and Adventure
Marie Moynihan
4 Mins
May 12, 2015
Being free to work from wherever you feel like, with no boundaries tying you to a specific place or country.
Blog
A Career in Remote Working: Embracing Flexibility and Freedom
Marie Moynihan
3 Mins
April 28, 2015
From the beginning, Zyte has been a fully remote company, and now boasts over 100 employees working from all over the world, either from their homes or local coworking spaces.
Blog
Zyte: A Remote Working Success Story
Marie Moynihan
3 Mins
March 17, 2015
When Zyte came into the world in 2010, one thing we wanted was for it to be a company powered by a global workforce, with each individual working remotely from anywhere in the world.
Blog
Why MongoDB Is A Bad Choice For Storing Scraped Data
Shane Evans
4 Mins
May 13, 2013
MongoDB was used early on at Zyte to store scraped data because it's convenient. Scraped data is represented as (possibly nested) records which can be serialized to JSON.
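To illustrate the convenience the excerpt refers to, here is a minimal sketch assuming pymongo and a local MongoDB instance; the database, collection name, and record fields are hypothetical examples, not Zyte's actual schema.

```python
# Sketch: a nested scraped record maps directly onto a MongoDB document.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
items = client["scraping"]["items"]  # hypothetical database/collection

# A scraped item with nested fields; it serializes naturally to JSON/BSON.
record = {
    "url": "https://example.com/product/42",
    "name": "Example Product",
    "price": {"amount": 19.99, "currency": "USD"},
    "reviews": [
        {"rating": 5, "text": "Great"},
        {"rating": 3, "text": "OK"},
    ],
}
items.insert_one(record)
```

The post goes on to argue that this convenience breaks down at scale, which is why it recommends against MongoDB for storing scraped data.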