Web Crawler
A web crawler (sometimes called a spider or bot) is a program that systematically discovers URLs and downloads pages. Crawlers start from a seed set of URLs, fetch each one, parse out the links it contains, queue the new URLs, and repeat until they have covered the intended scope (a minimal version of this loop is sketched below).

Search engines run the world's largest crawlers; Google's crawler fetches billions of pages per day. Common Crawl, a separate nonprofit project, runs its own crawler and publishes the resulting public web archive, which many AI training corpora derive from.

Crawlers are distinguished from scrapers by intent and breadth. A crawler is built to discover and traverse: its job is to find URLs and produce an index. A scraper is built to extract specific data fields from known URLs. In practice, most production systems combine both: a crawler discovers product pages on an e-commerce site, then a scraper extracts the price, title, and stock level from each.

For AI builders, the choice usually comes down to this: do I need a general crawler (Crawl4AI, Crawlee, Scrapy) or a specialized one bundled with extraction (Firecrawl, Apify)? The general crawlers give you maximum control and zero per-request cost but require operational work: proxy management, rate limiting, retry logic. The bundled platforms charge per page but handle the boring parts. Choose a general crawler when you have engineering capacity and steady volume; a bundled platform when you need to move fast.
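The fetch, parse, queue loop described above fits in a few dozen lines. Below is a minimal breadth-first sketch using only Python's standard library; the seed URL, page limit, and same-domain scope check are illustrative assumptions, and it omits the robots.txt handling, rate limiting, and retry logic a production crawler needs.

```python
# Minimal breadth-first crawler sketch (standard library only).
# Illustrative assumptions: placeholder seed URL, fixed page limit,
# and "scope" defined as the seed's domain. No robots.txt, politeness
# delays, or retries -- a real crawler needs all three.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=100):
    """Fetch pages breadth-first, staying on the seed's domain."""
    scope = urlparse(seed_url).netloc
    queue = deque([seed_url])
    visited = set()

    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages; a real crawler would retry

        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Queue only links that stay within the intended scope.
            if urlparse(absolute).netloc == scope and absolute not in visited:
                queue.append(absolute)

    return visited


if __name__ == "__main__":
    pages = crawl("https://example.com")  # placeholder seed URL
    print(f"Discovered {len(pages)} pages")
```

The frontier here is a plain FIFO queue, which gives breadth-first traversal; production crawlers typically replace it with a prioritized, persistent queue and deduplicate URLs after normalization.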