WebKit's HTMLDocumentParser frequently must block parsing while waiting for a script or stylesheet to download. During this time, the HTMLPreloadScanner looks ahead in the source for subresources whose downloads can be started speculatively. I've always assumed that discovering subresources sooner is a key factor in loading web pages efficiently, but until now I never had a good way to quantify it.
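The core idea is simple enough to sketch. This is a toy look-ahead scanner in Python, not WebKit's C++ implementation; the set of tags and attributes it recognizes is illustrative only:

```python
from html.parser import HTMLParser

# Illustrative subset: tag -> attribute that names a subresource URL.
PRELOADABLE = {"img": "src", "script": "src", "link": "href"}

class PreloadScanner(HTMLParser):
    """Toy look-ahead scanner: collects subresource URLs from raw HTML
    so their downloads could be started while the real parser is blocked."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Only preload stylesheet links, not e.g. favicons.
        if tag == "link" and attrs.get("rel") != "stylesheet":
            return
        url_attr = PRELOADABLE.get(tag)
        if url_attr and attrs.get(url_attr):
            self.urls.append(attrs[url_attr])

def scan(html_source):
    """Return the subresource URLs found by looking ahead in html_source."""
    scanner = PreloadScanner()
    scanner.feed(html_source)
    return scanner.urls
```

For example, `scan('<script src="a.js"></script><img src="b.png">')` returns `['a.js', 'b.png']`; in the real parser, the script download would block parsing, but the image fetch can start immediately.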
How effective is it?
Today I used Web Page Replay to test a build of Chromium with preload scanning disabled vs a stock build. The results were definitive. A sample of 43 URLs from Alexa's top 75 websites loaded on average in 1,086ms without the scanner and 879ms with it. That is a ~20% savings!
That number conceals some subtleties. The preload scanner has zero effect on highly optimized sites such as google.com and bing.com. In stark contrast, cnn.com, a subresource-heavy site, loads fully twice as fast with the scanner enabled.
Why does this matter?
There is a lot of room for improvement in the preload scanner. These results tell me that it is worth spending time giving it some serious love. Some ideas:
- It doesn't detect iframes, @import stylesheets, fonts, HTML5 audio/video, and probably lots of other types of subresources.
- When the parser is blocked in the <head>, it doesn't scan ahead into the <body>.
- It doesn't work on XHTML pages (Wikipedia is perhaps the most prominent example).
- The tokens it generates are not reused by the parser, so in many cases parsing is done twice.
- External stylesheets are not scanned until they are entirely downloaded. They could be scanned as data arrives, as is done for the root document.
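The last idea can be sketched as an incremental scanner that is fed stylesheet data chunk by chunk and reports @import URLs as soon as they are complete. Again a toy in Python, with a deliberately simplified @import pattern (quoted URLs only), not WebKit code:

```python
import re

# Simplified: matches @import "..." or @import url("..."), quoted URLs only.
IMPORT_RE = re.compile(r"@import\s+(?:url\(\s*)?[\"']([^\"']+)[\"']")

class IncrementalCSSScanner:
    """Toy incremental scanner: finds @import URLs in a stylesheet as
    chunks arrive, instead of waiting for the whole download to finish."""

    def __init__(self):
        self.buffer = ""
        self.pos = 0  # how far we have successfully scanned

    def feed(self, chunk):
        """Append a chunk of CSS; return any newly completed @import URLs."""
        self.buffer += chunk
        new_urls = []
        # Resume from self.pos, so a rule split across two chunks is
        # matched once its second half arrives.
        for match in IMPORT_RE.finditer(self.buffer, self.pos):
            new_urls.append(match.group(1))
            self.pos = match.end()
        return new_urls
```

An @import that straddles a chunk boundary simply stays pending until the closing quote arrives, at which point the next `feed()` call reports it.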
The test was performed with a simulated connection of 5Mbps download, 2Mbps upload, and a 40ms RTT, on OS X 10.6. The full data set is available.