On Sun, Jul 12, 2009 at 2:43 PM, William Allen Simpson <william.allen.simpson@gmail.com> wrote:
> OK, I've looked. I'm certainly no expert in hand-editing HTML, although I've done more than enough of it over the years, but I just don't see the problem that's being solved.
> Many/most pages already serve up more than 32K. You're proposing a tiny savings of fractional percentages in bytes, all so it's more legible to humans who never actually see it and aren't about to edit this stuff.
Some humans do see it, namely developers and similar sorts: people writing CSS and JS, for instance. There's value in readable code for debugging purposes, all else being equal.
> You know I've agreed with you more often than not over the years, and I've never cared much about screen-scraping bots after the API worked, but is this really worth the effort?
It wouldn't be much effort, especially over time.
> I'm of the opinion that compatibility with old browsers is much more important than human readability.
This is much less likely to introduce compatibility problems than many other changes we make, e.g., to CSS or JS. HTML parsing has been pretty uniform across browsers for years, except in edge cases; there have been no new features since HTML 4, after all.
> Do you have copies of W98 and W2K to regression-test against?
Unnecessary. It's seriously unlikely that we have acceptable support for any browser before IE5 anyway, and IE5 can be run on modern systems (I use ies4linux). In practice, we haven't gone out of our way to support browsers older than IE6 for a long time now. If someone brings up an issue we'll consider fixing it, but we're not proactively hunting down browsers that old and trying to work around their bugs. Nobody is; the cost-benefit just isn't reasonable.
In fact, I find that the Wikipedia main page is almost completely unreadable in IE5 already. I have never seen a single complaint about that, not even once.