On 1 April 2012 13:01, David Gerard dgerard@gmail.com wrote:
On 1 April 2012 11:55, Petr Bena benapetr@gmail.com wrote:
I see no point in doing that. Https doesn't support caching well and is generally slower. There is no use for readers for that.
The use is that the requests themselves are encrypted, so that the only thing logged is that they went to Wikimedia. You did read the linked articles, right?
Obviously, I cannot confirm whether Mr Bena read the linked articles, but he did address the technical restrictions.
Wikimedia already puts an incredible amount of effort into caching its content, because *so many* users visit Wikipedia and its sister projects daily.
And since most of the content is fairly static, caching makes a lot of sense.
However, HTTPS does not support caching well (an intermediate cache cannot read an encrypted request, so it cannot serve a stored copy), which means each page would suddenly have to be generated for *each* request. It's true that MediaWiki itself supports caching, but its own caching is nowhere near as fast as a dedicated caching server like Varnish (although I believe a less powerful caching server is used on Wikimedia's servers).
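To illustrate why a shared front-end cache matters so much here, below is a toy sketch of a URL-keyed cache with a time-to-live, in the spirit of (but vastly simpler than) Varnish. The class and function names are hypothetical, not anything from Wikimedia's actual setup; the point is only that the cache must be able to read the request URL to look up a stored copy, which is exactly what end-to-end encryption prevents.

```python
import time

class ResponseCache:
    """Toy URL-keyed response cache with a TTL.

    A shared cache like this only works when it can see the request
    URL in the clear, i.e. for plain HTTP, or if TLS is terminated
    at the cache itself rather than at the origin server.
    """

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}   # url -> (body, stored_at)
        self.hits = 0
        self.misses = 0

    def get(self, url, render_page, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(url)
        if entry is not None and now - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]          # hit: no expensive render needed
        self.misses += 1
        body = render_page(url)      # miss: full MediaWiki-style render
        self.store[url] = (body, now)
        return body

# Hypothetical renderer standing in for MediaWiki's page generation.
render = lambda url: f"<html>rendered {url}</html>"

cache = ResponseCache(ttl_seconds=300)
cache.get("/wiki/HTTPS", render, now=0)    # miss: rendered and stored
cache.get("/wiki/HTTPS", render, now=10)   # hit: served from the store
cache.get("/wiki/HTTPS", render, now=400)  # TTL expired: rendered again
print(cache.hits, cache.misses)  # 1 2
```

With millions of readers requesting the same mostly static pages, nearly every request becomes a cheap cache hit; take the cache out of the path and every one of those requests falls through to the slow render step.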
The trade-off is that the service would either be slower for everyone or require more servers, and I am not sure Wikimedia has that kind of money.
Those are the *technical* limitations to defaulting to HTTPS.