
Web Index

A web index is a searchable database of web pages built by systematically crawling and processing the web. When you type a query into Google, you are searching Google's web index: a structured representation of billions of pages that Google's crawlers have visited, downloaded, and analyzed. The index stores not just the text of each page but also metadata such as links, publication dates, structured data markup, and relevance signals.

Building a web index is one of the most capital-intensive operations in technology. It requires crawling infrastructure that can fetch billions of pages, storage systems that can hold petabytes of content, and processing pipelines that can extract, deduplicate, and rank that content. Google's index is the largest, covering hundreds of billions of pages. Bing, Yandex, and Brave maintain their own independent indexes. Common Crawl provides an open, freely accessible index that is widely used for research and as training data for language models.

For AI product builders, web indexes matter because they determine the breadth and freshness of the data your application can access. If you use a search API that queries Google's index, you inherit Google's coverage and ranking. If you use the Brave Search API, you get results from Brave's independent index of roughly 40 billion pages. If you use Common Crawl, you get a static snapshot that may be weeks or months old but is free and unrestricted.

The distinction between dependent and independent indexes is commercially significant. Many SERP APIs work by scraping Google's results, which means they are downstream of Google's index, subject to Google's ranking changes, and potentially exposed to legal risk. APIs built on independent indexes, such as Brave, Mojeek, and Exa, own their data pipeline end to end. For AI applications that need reliable, long-term data access, the independence of the underlying index is a strategic consideration worth evaluating early.
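To make the "static snapshot" point concrete, here is a minimal sketch of how an application might look up a domain in a Common Crawl snapshot via its CDX index server. The endpoint pattern and the snapshot name are assumptions for illustration; the current list of snapshots is published at index.commoncrawl.org, and fetching the built URL requires an ordinary HTTP client.

```python
from urllib.parse import urlencode

# Assumed base URL of Common Crawl's CDX index server (check
# index.commoncrawl.org for the live snapshot list).
CDX_BASE = "https://index.commoncrawl.org"

def cc_index_query(snapshot: str, url_pattern: str, limit: int = 5) -> str:
    """Build a CDX index query URL for one Common Crawl snapshot.

    Each snapshot is a fixed crawl (e.g. a given month), which is why
    results from it can be weeks or months old.
    """
    params = urlencode({"url": url_pattern, "output": "json", "limit": limit})
    return f"{CDX_BASE}/{snapshot}-index?{params}"

# Snapshot name below is an illustrative placeholder.
query = cc_index_query("CC-MAIN-2024-10", "example.com/*")
```

Fetching the resulting URL returns one JSON record per captured page, with fields such as the capture timestamp and a pointer into the WARC archives holding the raw content. This is the trade-off the article describes: no API key and no usage restrictions, but freshness is bounded by the snapshot date.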