Most systems "read online" by brute force. They spin up 50 nodes, shuffle terabytes across the network, and pray the optimizer doesn't choke. ADX does it differently. It leverages a proprietary indexing technology that is closer to a search engine (think Elasticsearch) than a traditional database (think Postgres), but with the aggregation power of a column-store.
Spark shuffles are the enemy of scalability. ADX uses a concept called extents (immutable compressed column segments). When you scale out, ADX doesn't reshuffle the world. It redistributes the metadata about those extents. The data stays put; the query logic moves to the data. This is why a single ADX cluster can handle 200 MB/s of sustained ingestion and still serve interactive queries.
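To make the contrast concrete, here is a minimal Python sketch of that idea — not ADX's actual implementation, and every name in it is hypothetical. Extents are immutable references to data that never moves; "scaling out" only recomputes which node owns which extent reference.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Extent:
    """Immutable compressed column segment: the data never moves after ingest."""
    extent_id: int
    storage_url: str   # where the compressed columns actually live
    row_count: int

def assign_extents(extents, node_count):
    """'Scale out' in this toy model redistributes only metadata:
    each node receives a list of extent references, never the bytes."""
    assignment = {n: [] for n in range(node_count)}
    for ext in extents:
        assignment[ext.extent_id % node_count].append(ext)
    return assignment

extents = [Extent(i, f"blob://store/extent-{i}", 1_000_000) for i in range(8)]

two_nodes = assign_extents(extents, 2)
four_nodes = assign_extents(extents, 4)  # doubling the cluster

# The set of storage locations is identical before and after scale-out:
# the mapping changed, but no data was shuffled across the network.
urls_before = {e.storage_url for exts in two_nodes.values() for e in exts}
urls_after = {e.storage_url for exts in four_nodes.values() for e in exts}
assert urls_before == urls_after
```

The design choice this illustrates is that immutability makes redistribution cheap: because an extent can never change, handing its reference to a different node requires no coordination, no copy, and no rebalancing of bytes.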
We’ve been sold a comforting lie for the last decade.
If you haven't spent a weekend ingesting a billion log lines into ADX and running a summarize across them in under two seconds, you haven't yet understood what "scalable" actually means.
Azure Data Explorer succeeds because it indexes aggressively at ingest so it can ignore aggressively at query. When you "read online" in ADX, you aren't reading the data. You are reading the index of the index.
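A toy Python model of "ignore aggressively" — assumed for illustration only, with hypothetical names, not ADX's real index format. A tiny min/max summary is built once per extent at ingest time; at query time the planner consults only those summaries and skips any extent that cannot possibly match.

```python
from dataclasses import dataclass

@dataclass
class ExtentIndex:
    """Lightweight per-extent metadata built at ingest: the 'index of the index'."""
    min_ts: int
    max_ts: int
    rows: list  # stands in for the compressed column data

def ingest(batches):
    """Pay the indexing cost once, at write time."""
    return [ExtentIndex(min(b), max(b), b) for b in batches]

def query_count(extents, lo, hi):
    """Count rows with timestamp in [lo, hi].
    Returns (row_count, extents_scanned) so the pruning is visible."""
    total, scanned = 0, 0
    for ext in extents:
        if ext.max_ts < lo or ext.min_ts > hi:
            continue  # pruned from metadata alone — the data is never read
        scanned += 1
        total += sum(1 for t in ext.rows if lo <= t <= hi)
    return total, scanned

extents = ingest([[1, 2, 3], [10, 11, 12], [20, 21, 22]])
total, scanned = query_count(extents, 10, 12)
assert total == 3 and scanned == 1  # two of three extents never touched
```

The same shape holds at a billion rows: the cost of a selective query scales with the number of surviving extents, not with the size of the table.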
The Latency Lie: Why "Real-Time" Fails at Scale and How Azure Data Explorer Rewrites the Contract
The lie is this: "You can use your data lake for everything. Just add a little Spark, maybe a dash of Presto, and voilà—real-time analytics."