Hi,
I was curious what process others are using to deploy new software
across servers. Does Wikipedia use Capistrano or something similar?
Thanks in advance for any info,
Roger
A couple of times a year (such as about an hour ago) somebody does
something like trying to delete Wikipedia:Sandbox on
en.wikipedia.org, which reaaalllly bogs down the server due to the large
number of revisions.
While there are warnings about this, I'm hacking in some limits which
will restrict such deletions to keep the system from falling over
accidentally.
At the moment I've set the limit at 5000 revisions (as
$wgDeleteRevisionsLimit). The error message is generic for now and
there's no override group with the 'bigdelete' privilege live yet, but it
should be prettified soon.
(Note -- the revision count is currently done with an index estimate, so
it could overestimate on some pages.)
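For anyone wanting to mirror this on their own wiki, here is a minimal LocalSettings.php sketch along the lines of what's described above; the group given the override is just an illustration, not what's deployed on Wikimedia:

// LocalSettings.php -- sketch only, based on the description above.
$wgDeleteRevisionsLimit = 5000;                      // refuse ordinary deletion above this many revisions
$wgGroupPermissions['steward']['bigdelete'] = true;  // hypothetical: let one group override the limit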
-- brion
Hello
I have started to use Moodle and I am very pleased with its forum
module, because:
- there is nice threading
- you receive a notification if somebody answers one of your posts.
Now Moodle relies on PHP, Apache (or another web server), and a MySQL (or
other) database.
Could its forum module be used for the MediaWiki discussion pages,
which frankly are quite a PITA?
I know that the subject of threading on MediaWiki discussion pages
pops up from time to time, but given that Moodle offers a better
interface, the question is why MediaWiki can't benefit from it.
regards
Uwe Brauer
A few hours ago we seem to have lost most of the content of our image
Squid caches. While the Squid servers fill their caches back up, image
loading will unfortunately be slow and unreliable, because the image
server backend is overloaded. The situation is recovering slowly, but it
may take a while before speed is back to a normal level.
This incident was most likely caused by a change I made to an ACL
shortly before it started, although as of yet we don't quite understand
why, as it seems rather unrelated. Investigation continues...
--
Mark Bergsma <mark(a)wikimedia.org>
System & Network Administrator, Wikimedia Foundation
Hi all!
I've just joined the mailing list, as I have some technical questions about
MediaWiki customization.
I'm currently a senior studying Informatics at the University of Washington, and
I'm doing my capstone project on designing a wiki/knowledge base for home
network issues.
I've chosen MediaWiki as a starting point since I think it supports many of
the ideas behind what I'm trying to accomplish.
One of the problems I have now is that I want to change the order of topics
in each article depending on the user who views it.
Since each article represents a network device and each topic represents a
network problem, I want to sort the problems based on the user's network
device background information and the device background information the user inputs.
I don't know if this sounds too vague, or whether I'm even in the right place
to ask this.
But I wonder if you can point me in a direction to start looking at how
to add this type of logic.
Let me know if I should ask this somewhere else!
Thank you!!
Timothy Chen
On 23/01/2008, Gerard Meijssen <gerard.meijssen(a)gmail.com> wrote:
> http://meta.wikimedia.org/wiki/Using_OmegaWiki_for_Commons is the way in
> which Commons can have tagging with multi lingual functionality. This is
> what I think Commons needs. I think this is the time to start doing this.
That looks like just what I was thinking of (and yes, tagging could
really do with multilingual functionality).
Devs - what are the prospects of OmegaWiki, or at least this bit of
its functionality, going onto Commons? The thing really crippling
Commons is bad search.
(Earlier today I floated the idea of using a template to hold tags,
and a tag search on the toolserver that would read and index them
every day. Not that I could write the latter, and it's a bit kludgy too.)
- d.
On 23/01/2008, Gregory Maxwell <gmaxwell(a)gmail.com> wrote:
> Success is less about the content, and more about *the collection* and
> the search. Google made its first zillion billion not because it
> controlled a lot of content but because it helped people find a lot of
> other people's content.
The search, the search, the search! "We have Wikimedia Commons, with
millions of freely-reusable pre-cleared photos. It's like Getty Images
with a really crap search."
(No, not even Mayflower has fixed that.)
> I think this is an area where commons really has something to offer:
> Universally editable metadata could make for impressive search power,
> and free licensing means all images are available for use (sometimes,
> with copyleft works, at the price of freely releasing your own work).
If turning categories into tags within MediaWiki is unlikely to happen
soon (I recall the previous experiment where on Postgres it was lovely
and on MySQL it was horribly slow ... and there's zero chance of
Wikimedia abandoning MySQL in the foreseeable future) - what about a
"tags" template for image pages, which could then be parsed by a search
application on the toolserver? Update daily or something. Then an
image could have 10 or 100 or 1000 tags, even if that many MediaWiki
categories would be problematic to display or process. Sound feasible?
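Just to make that concrete, here's a rough sketch of what such a toolserver indexer could look like. The {{tags|...}} template and everything else here are assumptions, not an existing tool:

<?php
// Hypothetical indexer: fetch an image page's wikitext from the Commons API,
// pull out a {{tags|tag1|tag2|...}} template, and record the tags for searching.
$page = 'Image:Example.jpg';
$url  = 'http://commons.wikimedia.org/w/api.php?action=query&prop=revisions'
      . '&rvprop=content&format=php&titles=' . urlencode( $page );
$data = unserialize( file_get_contents( $url ) );
$info = array_shift( $data['query']['pages'] );
$text = $info['revisions'][0]['*'];

if ( preg_match( '/\{\{tags\|([^}]+)\}\}/i', $text, $m ) ) {
    foreach ( array_map( 'trim', explode( '|', $m[1] ) ) as $tag ) {
        // A real tool would insert (page, tag) into an index table here;
        // this just prints the pairs.
        echo "$page -> $tag\n";
    }
}

Run that over all image pages (or just recent changes) from a daily cron job and the index stays reasonably fresh.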
(cc to commons-l and wikitech-l)
- d.
Hello,
Sometimes the problem comes up that an external tool needs to verify
that a user is the same user as on a wiki. For example, one may want
an opt-in system for an edit counter [1], or one may be organizing a
competition in which every Wikimedian can vote [2]. Currently such
authentication is done with various hacks, such as posting some code
in an edit summary, abusing the mail features of MediaWiki, or having
the user post a certain token to a wiki page. This is not ideal, and
moreover quite complicated for non-tech-savvy people.
I would therefore like to have some way to verify a user's identity
without asking for their password or having them post weird stuff to some page.
What I think would be a solution (a rough code sketch follows the list):
# The user visits http://externaltool.com/authenticate and submits their
username (and wiki).
# The tool will add the user to its database, generate a random
token and redirect the user to
http://wiki.org/wiki/Special:VerifyUser?token=secret
# MediaWiki will then check whether the user is logged in and, if not,
ask them to log in.
# MediaWiki will do some magic to generate a new_token from token and
redirect the user back to the tool, adding the new_token to its URL.
# The tool will then query the wiki with its token and the new_token
and ask whether the two tokens form a valid pair.
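To make the flow a bit more concrete, here is a minimal sketch of the tool's side of the exchange. Special:VerifyUser and the action=verifyuser API call are the proposed interfaces from the steps above, not existing MediaWiki features:

<?php
// Step 2: generate a random token, remember it, and send the user to the wiki.
function redirectToWiki( $username ) {
    $token = md5( uniqid( mt_rand(), true ) );
    // ... store $token together with $username in the tool's database ...
    header( 'Location: http://wiki.org/wiki/Special:VerifyUser?token=' . urlencode( $token ) );
    exit;
}

// Step 5: after the wiki has redirected the user back with new_token in the URL,
// ask the wiki whether the two tokens form a valid pair. The response format
// here is made up for the sake of the example.
function verifyPair( $token, $newToken ) {
    $result = unserialize( file_get_contents(
        'http://wiki.org/w/api.php?action=verifyuser&format=php'
        . '&token=' . urlencode( $token ) . '&newtoken=' . urlencode( $newToken )
    ) );
    return !empty( $result['verifyuser']['valid'] );
}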
The downside of this would be that there are many redirects involved
and also quite a lot of traffic. Would such a thing have any chance of
being enabled on Wikimedia wikis if developed?
Cheers,
Bryan
* [1] http://tools.wikimedia.de/~interiot/cgi-bin/editcount_optin.cgi?user=Common…
/ http://tools.wikimedia.de/~interiot/cgi-bin/editcount_optin.cgi?user=Common…
* [2] http://commons.wikimedia.org/wiki/Commons:Picture_of_the_Year/2007/Voting