It will be on the Wikimedia IRC Network at irc.wikimedia.org - NOT Freenode.
This is where all the wikis currently have feeds through the rc bot.
Kind regards,
E
English Wikipedia
IRC: TheLetterE
Email: e.wikipedia(a)gmail.com
--------------------------------------------------
From: "Michael Bimmler" <mbimmler(a)gmail.com>
Sent: Sunday, January 13, 2008 10:49 PM
To: "Wikimedia Foundation Mailing List" <foundation-l(a)lists.wikimedia.org>
Cc: "Wikimania general list (open subscription)"
<wikimania-l(a)lists.wikimedia.org>
Subject: Re: [Wikimania-l] [Foundation-l] Wikimedia IRC Network - New
Channel (#wikimania2008.wikimedia)
> On Jan 13, 2008 1:26 PM, E <e.wikipedia(a)gmail.com> wrote:
>> Hello all,
>>
>> Seeing I don't really know where to post this (on foundation-l or
>> wikitech-l), I have decided to post it here and someone can hopefully
>> pass it on to the person that it may concern.
>>
>> As always, we have RC channels on irc.wikimedia.org for every wiki, and
>> it seems that the #wikimania2008.wikimedia channel has not yet been
>> created for the feed from http://wikimania2008.wikimedia.org.
>
> Per Freenode policy, this would probably need to be called
> #wikimedia-wikimania2008 (did you check whether that one already
> exists?)
>
> Michael
>
> _______________________________________________
> Wikimania-l mailing list
> Wikimania-l(a)lists.wikimedia.org
> http://lists.wikimedia.org/mailman/listinfo/wikimania-l
Hello colleagues and shareholders (community :)!
It has been a while since my last review of operations (aka hosting
report) - so I will try to give an overview of some of the things
we've been doing =)
First of all, I'd like to thank Mr. Moore for his fabulous law. It
allowed Wikipedia to stay alive - even though we had to grow again in
all directions.
We still have Septembers. Well, it is a nice name for the recurring
pattern that provides Shock and Awe to us - after a period of stable
usage, every autumn the number of users suddenly goes up and stays
there - just when we had let ourselves think we'd finally reached some
saturation and would never grow more. Until the next September.
We still have World Events. People rush to us to read about conflicts
and tragedies, joys and celebrations. Sometimes because we have had
the information for ages, sometimes because it all matured in seconds
or minutes. Nowhere else can a document require that much concurrent
collaboration, and nowhere else can it provide as much value
immediately.
We still have history. From day one of the project, we can see people
getting into dramas, discussing, evolving and revolving every idea on
the site. Every edit stays there - accumulating not only the final
pieces of information, but the whole process of assembling the content.
We still advance. The tools that facilitate the community get more
complex, and we are growing an ecosystem of tools and processes inside
and outside the core software and platform. Users are the actual
developers of the project; the core technology just lags behind,
assisting.
Our operation becomes more and more demanding - and that's quite a bit
of work to handle.
Ok, enough of such poetic introduction :)
== Growth ==
Over the second half of 2006, traffic and requests to our cluster
doubled (actually, that happened in just a few months).
Over 2007, traffic and requests to our cluster doubled.
Pics:
http://www.nedworks.org/~mark/reqstats/trafficstats-yearly.png
http://www.nedworks.org/~mark/reqstats/reqstats-yearly.png
== Hardware expansion ==
Back in September 2006 we had quite a huge load increase, and we went
for a capacity expansion, which included:
* 20 new Squid servers ($66k)
* 2 storage servers ($24k)
* 60 application servers ($232k)
The German foundation additionally assisted with purchasing 15 Squid
servers in November for the Amsterdam facility.
Later, in January 2007, we added 6 more database servers (for $39k),
three additional application servers for auxiliary tasks (such as
mail), and some network and datacenter gear.
The growth over autumn/winter led us to a quite big ($240k) capacity
expansion back in March, which included:
* 36 very capable 8-core application servers (thank you, Moore, yet
again :) - that was around $120k
* 20 Squid servers for Tampa facility
* Router for Amsterdam facility
* Additional networking gear (switches, linecards, etc) for Tampa
The only serious capacity increase afterwards was another
'German' (thanks yet again, Verein) batch of 15 Squid servers for
Amsterdam in December 2007.
We do plan to improve our database and storage servers soon - that
would add stability to our dumps building and processing, as well as
better support for various batch jobs.
We have been especially pushy about exploiting warranties on all
servers, and nearly all machines ever purchased are in working state,
doing one or another kind of workload. All the veterans of 2005 are
still running at amazing speeds doing the important jobs :)
Rob joining to help us with datacenter operations has allowed us to
have really nice turnarounds on pretty much every piece of datacenter
work - as volunteer remote hands were no longer available during
critical moments. Oh, and look how tidy the cabling is:
http://flickr.com/photos/midom/2134991985/ !
== Networking ==
This has been mainly in Mark's and River's capable hands - we
underwent a transition from hosting customer to internet service
provider (or at least an equal peer to ISPs) ourselves. We have our
own independent autonomous systems both in Europe and the US -
allowing us to pick the best available connectivity options, resolve
routing glitches, and get free traffic peering at internet exchanges.
That provides quite a lot of flexibility, of course, at the cost of
more work and skills required.
This is also part of an overall well-managed powerful datacenter
strategy. Instead of low-efficiency small datacenters scattered
around the world, a core facility like the one in Amsterdam provides
high availability and close proximity to major Internet hubs and
carriers, and is generally at the center of the region's inter-tubes.
Though it would be possible to reach out into multiple donated hosting
places, that would just lead to slower service for our users, and
someone would still have to pay for the bandwidth. As we are pushing
nearly 4 Gbps of traffic, there are not many donors who wouldn't feel
such traffic.
== Software ==
There has been a lot of engineering effort, often behind the scenes.
Various bits had to be rewritten to act properly under user activity.
The most prominent example of such work is Tim's rewrite of the parser
to more efficiently handle huge template hierarchies. In the ideal
case, users will not see any visible change, except multiple-factor
performance gains on expensive operations.
In the past year, lots of activities - the ways people use customized
software such as bots, javascript extensions, etc - have changed the
performance profile, and nowadays much of the performance work on the
backend is to handle various fresh activities - and anomalies.
One of the core activities was polishing the caching of our content,
so that our application layer could concentrate on the most important
process - collaboration - instead of content delivery.
Lots and lots of small things have been added or fixed - though some
developments were quite demanding - like multimedia integration,
which was challenging due to our freedom requirements.
Still, there was constant tradeoff management, as not every feature
was worth the performance sacrifice and costs; on the other hand,
having the best possible software for collaboration is also
important :) Introducing new features, or migrating them from outside
into the core platform, has always been a serious engineering effort.
Besides, there would be quite a lot of communication - explaining how
things have to be built so they do not collapse on the live site,
discussing security implications, changes of usage patterns, ...
Of course, MediaWiki is still one of the most actively developed
pieces of web software - and here Brion and Tim lead the volunteers,
as well as spend their days and nights in the code.
Across the overall stack, we have worked at every layer - tuning
kernels for our high-performance networking, experimenting with
database software (some servers are running our own fork of MySQL,
based on the Google changes), perfecting Squid - our web caching
software (Mark and Tim ended up in the authors list), and digging into
problems and specialties of the PHP engine. Quite a lot of the
problems we hit are very huge-site-specific, and even when other huge
shops hit them, we're the ones who are always free to release our
changes and fixes. Still, colleagues from other shops are willing to
assist us too :)
There were lots of tiny architecture tweaks that allowed us to use
resources more efficiently, but none of them was major - pure
engineering all the time. It seems that lately we have stabilized
lots of things in how Wikipedia works - and it seems to work quite
fluently. Of course, one must mention Jens' keen eye, taking care of
various especially important but easily overlooked things.
River has dedicated a lot of attention to supporting the community
tools infrastructure at the Toolserver - and also to maintaining
off-site copies of projects.
The site doesn't fall down the minute nobody is looking at it, and
that is quite an improvement over the years :)
== Notes ==
People have been discussing whether running a popular site is really a
mission of the WMF. Well, the users created a magnificent resource, we
try to support it, and we do what we can. Thanks to everyone involved -
though it has been a far less stressful ride than in previous years,
still, nice work. ;-)
== More reading ==
May hurt your eyes: https://wikitech.leuksman.com/view/Server_admin_log
Platform description: http://dammit.lt/uc/workbook2007.pdf
== Disclaimer ==
Some numbers can be wrong, as this review was based not on audit, but
on vague memories :)
--
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
Any toolserver users want to take this one on? It's fabulously useful.
- d.
---------- Forwarded message ----------
From: Erik Moeller <erik(a)wikimedia.org>
Date: 11 Jan 2008 11:38
Subject: [Commons-l] Maintainer for FlickrLickr wanted
To: Wikimedia Commons Discussion List <commons-l(a)lists.wikimedia.org>
All,
I won't realistically be able to continue to maintain the FlickrLickr bot:
http://commons.wikimedia.org/wiki/User:FlickrLickr
The bot consists essentially of two components:
- a Perl command-line script to fill a MySQL database with information
from Flickr about freely licensed (CC-BY) photos;
- a Perl CGI script to let users choose photos they want to see
uploaded to Commons.
Both these scripts are PD code.
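For a sense of what the first component does, here is a minimal Python
sketch of the kind of Flickr API request involved (a hypothetical
re-creation, not the actual Perl code; license id 4 is Flickr's CC-BY
license, and the API key is a placeholder):

```python
from urllib.parse import urlencode

FLICKR_REST = "https://api.flickr.com/services/rest/"

def build_search_url(api_key, page=1, per_page=500):
    """Build a flickr.photos.search request URL restricted to CC-BY photos."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,       # placeholder - supply your own key
        "license": "4",           # 4 = Attribution (CC-BY) on Flickr
        "extras": "owner_name,original_format,license",
        "per_page": per_page,
        "page": page,
    }
    return FLICKR_REST + "?" + urlencode(params)

url = build_search_url("YOUR_API_KEY")
```

The updater would page through such results and insert each photo's
metadata into the MySQL database for later review.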
I'll note that I haven't run the database updater for a while, and if
the Flickr API has changed, it may need some fixes.
I'd be happy to provide some initial assistance with setup & use, but
you'd have to find your own hosting, or migrate it to the toolserver.
The script doesn't cause terrible load, but it does create local
copies of the images downloaded from Flickr.
FlickrLickr doesn't currently have a user registration process; I add
new reviewers by hand to the MySQL database. It would probably be wise
to change that, since much of my time was spent authorizing users &
reviewing their work.
It's a pretty powerful tool: Some 10K images have been uploaded
through the FlickrLickr review process.
Is anyone interested in running & maintaining the script?
Best,
Erik
_______________________________________________
Commons-l mailing list
Commons-l(a)lists.wikimedia.org
http://lists.wikimedia.org/mailman/listinfo/commons-l
midom(a)svn.wikimedia.org wrote:
> + /* Array of caching hints for ParserCache */
> + static public $mCacheTTLs = array (
> + 'currentmonth' => 86400,
> + 'currentmonthname' => 86400,
> + 'currentmonthnamegen' => 86400,
> + 'currentmonthabbrev' => 86400,
A clever hack for some of these might be to have the expiry vary based
on the expected change, where it's predictable.
For instance if we're dependent on a month change, and it's 18:00 on the
31st of the month, the value will change in just 21600 seconds. A
"smarter" cache could then expire it early, so the date flips over on
time instead of 18 hours late.
(In theory it should be possible to have the HTTP caching expiry match
this too, though I'm not sure if that'll interact properly with our
current cache header rewrites.)
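To make the idea concrete, here is a minimal sketch of a month-aware
TTL (a hypothetical helper, not actual MediaWiki code): the static
hint is clamped to the number of seconds remaining in the current
month, so a month-dependent value can never outlive the month.

```python
import calendar
from datetime import datetime, timedelta

def month_aware_ttl(now, static_ttl=86400):
    """Clamp a ParserCache-style TTL so it never outlives the month."""
    days_in_month = calendar.monthrange(now.year, now.month)[1]
    # First instant of the next month
    month_end = datetime(now.year, now.month, days_in_month) + timedelta(days=1)
    seconds_left = int((month_end - now).total_seconds())
    return min(static_ttl, seconds_left)

# 18:00 on January 31st: only 21600 seconds remain in the month
print(month_aware_ttl(datetime(2008, 1, 31, 18, 0)))  # → 21600
```

The same computed value could in principle feed the HTTP expiry
headers too, subject to the cache header rewrite caveat above.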
-- brion vibber (brion @ wikimedia.org)
Hello,
Making translations of the sitenotice at MediaWiki:Sitenotice/XX does
not work (only MW:Sitenotice ever gets used, regardless of the
interface language) because of caching.
http://bugzilla.wikimedia.org/show_bug.cgi?id=8280
The recent fundraiser somehow had localised sitenotices. Is it
possible for Commons to use this? (just on Commons)
thanks,
Brianna
--
They've just been waiting in a mountain for the right moment:
http://modernthings.org/
A beloved user on my wiki has mistakenly edited while not logged in,
leaving his company's IP address in the history of A_Sensitive_Article
and RecentChanges, etc.
(Next time the warning should be more noticeable.
http://bugzilla.wikimedia.org/show_bug.cgi?id=12474 )
Anyway, he is begging me to expunge his IP... What is the best way?
One IP, three pages, all of which are now old revisions.
I'm looking around maintenance/*. I see a lot of dangerous-looking
programs.
Hmmm, deleteRevision.php and reassignEdits.php look interesting.
But of course as with all maintenance/* programs, it is best to ask
here on this mailing list first as they might leave the database in
tatters.
By the way, while we are on the subject, let's say I have edited out a
sensitive URL, comment, etc. from the contents of one of my pages. But
that's not enough. It can still be read from the article's history.
Shall I use deleteRevision.php? (Kindly don't tell me to install
extensions, let's see how far we can get using just maintenance/*.
Hmmm, perhaps let's make a new page:
http://www.mediawiki.org/wiki/Manual:Removing_embarrassment )
I'm creating a publicly accessible academic knowledge base and it would be
nice to be able to add extra content depending on who is viewing. For
example, members who are logged in will see extra content that anonymous
users will not have access to. They will both be able to view the same page
and read most of the content, but if you want to read all of the content you
have to log in.
A much more complex version would add content to the page depending on what
usergroup you were in, what your location is, etc. For instance users from
Indiana would see slightly different content than users that were from
Massachusetts. I'm sure this could be done with templates; it would
simply require that the permissions on a certain template impact the
template and only the template. Every time I try to insert a
permission into the template, it seems to affect the permissions of
the entire page.
Thanks
-JN
You could create a usergroup and add everyone that is authorized to it.
To prevent anyone from viewing a page, use the following code in
LocalSettings.php:
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['*']['edit'] = false;
$wgGroupPermissions['*']['createpage'] = false;
$wgGroupPermissions['*']['createtalk'] = false;
Now to prevent people that are logged in from doing stuff, also add the
following code:
$wgGroupPermissions['user']['move'] = false;
$wgGroupPermissions['user']['read'] = false;
$wgGroupPermissions['user']['edit'] = false;
$wgGroupPermissions['user']['createpage'] = false;
$wgGroupPermissions['user']['createtalk'] = false;
$wgGroupPermissions['user']['upload'] = false;
$wgGroupPermissions['user']['reupload'] = false;
$wgGroupPermissions['user']['reupload-shared'] = false;
$wgGroupPermissions['user']['minoredit'] = false;
$wgGroupPermissions['user']['purge'] = false;
If you have created a group 'authorized' that you do want to give
access to the pages, just add the following (replacing 'authorized'
with the name of your usergroup):
$wgGroupPermissions['authorized']['move'] = true;
$wgGroupPermissions['authorized']['read'] = true;
$wgGroupPermissions['authorized']['edit'] = true;
$wgGroupPermissions['authorized']['createpage'] = true;
$wgGroupPermissions['authorized']['createtalk'] = true;
$wgGroupPermissions['authorized']['upload'] = true;
$wgGroupPermissions['authorized']['reupload'] = true;
$wgGroupPermissions['authorized']['reupload-shared'] = true;
$wgGroupPermissions['authorized']['minoredit'] = true;
$wgGroupPermissions['authorized']['purge'] = true;
More info on group permissions can be found at
http://www.mediawiki.org/wiki/Help:User_rights_management
Greetz,
Tom Maaswinkel
Rabobank Nederland
_____________________________________________________
Groep ICT | Systeemrealisatie | Systeemrealisatie |
Kwaliteitsmanagement | SRwiki
T. (030) 21 30122
T. Secretariaat (030) 21 65701 | Fax (030) 21 64661
Postbus 17100 | 3500 HG Utrecht | o.v.v. UH R4114
Gildenkwartier 199 Utrecht | locatie UH R4174
-----Original Message-----
From: wikitech-l-bounces(a)lists.wikimedia.org
[mailto:wikitech-l-bounces@lists.wikimedia.org] On behalf of sharmishtha gupta
Sent: Thursday, January 10, 2008 13:48
To: Wikimedia developers
Subject: Re: [Wikitech-l] Restrict anonymous views in mediawiki
We are integrating the MediaWiki application into an existing site.
For this we made a wrapper to check the authenticity of the user
against the existing site.
We want to allow only two types of users:
1. those already present in the MediaWiki database
2. those who are authentic according to our wrapper but new to the
MediaWiki database
The problem is, if someone gives the MediaWiki path directly to the
browser, he can easily click "create account" and log in to MediaWiki.
How do we prevent this anonymous user from logging in to MediaWiki?
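A hedged suggestion (this uses the standard $wgGroupPermissions
mechanism; verify the exact right names against your MediaWiki
version's documentation): disable account creation for anonymous
visitors in LocalSettings.php, so new accounts can only come in
through your authentication wrapper.

```php
# Anonymous visitors may not create accounts on-wiki;
# new users must come in through the authentication wrapper.
$wgGroupPermissions['*']['createaccount'] = false;

# Optionally lock down reading and editing for anonymous users as well:
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['*']['edit'] = false;
```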
On 1/10/08, Roan Kattouw <roan.kattouw(a)home.nl> wrote:
>
> sharmishtha gupta wrote:
> > We wish to restrict anonymous views to our MediaWiki installation.
> > Every unauthorized user is to be redirected to a particular page.
> > This page may not be a wiki page.
> >
> > Can we do this using LocalSettings.php?
> >
> > If not, do we have any extension/hook for the very purpose?
> >
> >
> >
> > Thanks in advance
> > _______________________________________________
> > Wikitech-l mailing list
> > Wikitech-l(a)lists.wikimedia.org
> > http://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> >
> What you *can* do is display a standard error page when anonymous
> users try to view pages. For this, see
> http://www.mediawiki.org/wiki/Manual:Preventing_access#1.5_upwards_3 .
> http://www.mediawiki.org/wiki/Manual:Preventing_access#1.5_upwards_3 .
> You can then edit the MediaWiki:whitelistreadtext and
> MediaWiki:whitelistreadtitle to customize the error message.
>
> Roan Kattouw (Catrope)
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> http://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
http://lists.wikimedia.org/mailman/listinfo/wikitech-l
================================================
The information contained in this message may be confidential
and is intended to be exclusively for the addressee. Should you
receive this message unintentionally, please do not use the contents
herein and notify the sender immediately by return e-mail.
Rabobank Nederland is a trade name of Cooperatieve Centrale
Raiffeisen-Boerenleenbank B.A. Rabobank Nederland is registered
by the Chamber of commerce under nr. 30046259