Analysts agree! http://www.rightscale.com/blog/cloud-cost-analysis/cloud-cost-analysis-how-m...
_<
- d.
Am I wrong or did they actually calculate that for labs only (which would be rather funny)? At least they link to https://wikitech.wikimedia.org/wiki/Special:Ask/-5B-5BResource-20Type::insta... ("[...] that run on up to 385 instances [...]") which AFAIK doesn't have any production servers.
Cheers,
Marius
On Wed, 2013-08-21 at 15:44 +0100, David Gerard wrote:
Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
This very article seems like yet another "Hey, check out my brand new Wikipedia redesign!" story. No, you're not wrong: they took the 170/385 numbers from the Labs stats.
On Wed, Aug 21, 2013 at 6:12 PM, hoo hoo@online.de wrote:
On 21 August 2013 16:12, hoo hoo@online.de wrote:
Heh. Please do post a comment of correction and post it here too, so it doesn't just vanish ;-)
- d.
Sadly they're moderating comments. I tweeted at the author, with links to WMF Ganglia as backup, and he definitely doesn't believe me; maybe something from a WMFer would help, if anyone thinks it's worth correcting: https://twitter.com/hassankhosseini/status/370090365354655744
Going by the Ganglia pages, the actual Wikipedia cluster has at least twice as much *RAM* as their scenario has *disk*. Pretty fun. (If you're curious, Ganglia's front page says it's tracking 14,744 cores on 988 hosts and 40T of RAM. Their scenario has <20T of disk. There may be additional capacity not in that Ganglia setup, though it seemed to cover the obvious stuff.)
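For what it's worth, the arithmetic behind that "at least twice" claim is just the two figures above divided; a trivial sketch (numbers copied from the Ganglia front page as quoted, so they're a snapshot, not authoritative):

```python
# Back-of-envelope check of the comparison above, using the quoted
# figures: ~40 TB of RAM tracked by Ganglia vs. < 20 TB of disk in
# the blog's cloud scenario. Both numbers are illustrative.
ganglia_ram_tb = 40        # ~40 TB RAM across 988 hosts (14,744 cores)
scenario_disk_tb = 20      # blog scenario provisions < 20 TB of disk

ratio = ganglia_ram_tb / scenario_disk_tb
print(f"RAM-to-scenario-disk ratio: {ratio:.1f}x")  # 2.0x

# The real cluster's RAM alone is at least twice the scenario's disk.
assert ratio >= 2
```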
On Wed, Aug 21, 2013 at 8:23 AM, David Gerard dgerard@gmail.com wrote:
https://twitter.com/hassankhosseini/status/370212655996235776 - he says that this was produced in collaboration with Wikimedian John Vandenberg (CC'ed), who might be able to provide more information on how the numbers were generated.
On Wed, Aug 21, 2013 at 9:02 AM, Randall Farmer randall@wawd.com wrote:
Cloud Company posts blog about how you should move to the cloud; also, water is wet, the sky is blue, more at 11.
On Wed, Aug 21, 2013 at 11:16 AM, Tilman Bayer tbayer@wikimedia.org wrote:
On Wed, Aug 21, 2013 at 9:16 AM, Tilman Bayer tbayer@wikimedia.org wrote:
For some reason, that tweet seems to have been deleted within the last hour, but there is some earlier discussion from that collaboration at https://twitter.com/hassankhosseini/status/283944238473936897 , with other interesting comments ("Running 450 servers for Wikipedia, 150 staff. On AWS you could run that with 2-4 sysadmins")
On 08/21/2013 01:24 PM, Tilman Bayer wrote:
"Running 450 servers for Wikipedia, 150 staff. On AWS you could run that with 2-4 sysadmins"
Obvious troll is obvious.
Anyone who says something like this with a straight face is either insane or has absolutely no idea what they're talking about. The one thing that has always struck me most about the Wikimedia projects is how incredible it is that a top-10 website could survive with such a minuscule staff.
Mind you, we accumulated a technical debt by being so lean for so long, and are only now catching up thanks to the focus on shoring up engineering in the past two years.
-- Marc
Threw my title around, see if that helps. :)
On 21 Aug 2013, at 18:02, Randall Farmer randall@wawd.com wrote:
Now there is an update [0] that says: "We learned today that the data set we used for this post might not be correct." No, it's not that they took the wrong data; the data they took is not correct! Shame on you, data!
Michał
[0] http://www.rightscale.com/blog/cloud-cost-analysis/cloud-cost-analysis-how-m...
Another recent publication that similarly used Wikipedia as an example to simulate the alleged benefits of a different hosting model: http://www.researchgate.net/publication/236942031_Symbiotic_Coupling_of_P2P_... (covered in the July Wikimedia Research Newsletter)
It's by two German computer scientists who conclude that the Wikimedia Foundation "can reduce the traffic needed for article lookups in case of Wikipedia up to 72%" by having participants in a P2P network store and serve some articles from their machines, while still also serving them from a central installation (the cloud). I seem to recall that this kind of proposal for Wikipedia was quite popular in the mid-2000s (when P2P was more in fashion and WMF had less money), to the point that Brion or Tim or someone else involved with the actual hosting wrote a rebuttal, which I can't find any more.
On Wed, Aug 21, 2013 at 7:44 AM, David Gerard dgerard@gmail.com wrote:
----- Original Message -----
From: "David Gerard" dgerard@gmail.com
http://www.rightscale.com/blog/cloud-cost-analysis/cloud-cost-analysis-how-m...
How many machines do we have right now? Couple hundred?
What's a Win2008 server license going for?
What percentage of our budget is that, anyway? 50?
Cheers, -- jra
On Wed, Aug 21, 2013 at 7:19 PM, Jay Ashworth jra@baylink.com wrote:
Without going into all the ridiculousness posted by this silly blog, do note that Windows Azure offers Linux VMs. This article is not actually about a Windows infrastructure.