I want to bring to the attention of interwiki bot programmers that the
Esperanto Wikipedia has recently changed its title policy for names.
Previously, family names were completely capitalized, so, for example,
my name would appear as Chuck SMITH. This was done to avoid confusion
with Asian names, in which the family name comes first.
We have now changed our policy so that titles will appear "normally",
and the name with the family name properly capitalized will appear only
in the first line of the article. We are currently in the process of
writing a bot to convert all of our biography titles to the new
system.
For those of you who speak Esperanto and are interested in the
details, the vote can be found at
http://eo.wikipedia.org/wiki/Vikipedio:Baloto_pri_majuskligo_de_familiaj_no…
which shows the result of the vote on the change: 21-19-3 (for-against-neutral).
Best wishes,
Chuck SMITH ;-)
An automated run of parserTests.php showed the following failures:
Going to run database updates for fiveralpha
Depending on the size of your database this may take a while!
..hitcounter table already exists.
..querycache table already exists.
..objectcache table already exists.
..categorylinks table already exists.
..logging table already exists.
..validate table already exists.
..user_newtalk table already exists.
..transcache table already exists.
..trackbacks table already exists.
..externallinks table already exists.
..job table already exists.
..langlinks table already exists.
..have ipb_id field in ipblocks table.
..have ipb_expiry field in ipblocks table.
..have rc_type field in recentchanges table.
..have rc_ip field in recentchanges table.
..have rc_id field in recentchanges table.
..have rc_patrolled field in recentchanges table.
..have user_real_name field in user table.
..have user_token field in user table.
..have user_email_token field in user table.
..have user_registration field in user table.
..have log_params field in logging table.
..have ar_rev_id field in archive table.
..have ar_text_id field in archive table.
..have page_len field in page table.
..have rev_deleted field in revision table.
..have img_width field in image table.
..have img_metadata field in image table.
..have img_media_type field in image table.
..have val_ip field in validate table.
..have ss_total_pages field in site_stats table.
..have iw_trans field in interwiki table.
..have ipb_range_start field in ipblocks table.
..have ss_images field in site_stats table.
..already have interwiki table
..indexes seem up to 20031107 standards
Already have pagelinks; skipping old links table updates.
..image primary key already set.
The watchlist table is already set up for email notification.
Adding missing watchlist talk page rows... ok
..user table does not contain old email authentication field.
Logging table has correct title encoding.
..page table already exists.
revision timestamp indexes already up to 2005-03-13
..rev_text_id already in place.
..page_namespace is already a full int (int(11)).
..ar_namespace is already a full int (int(11)).
..rc_namespace is already a full int (int(11)).
..wl_namespace is already a full int (int(11)).
..qc_namespace is already a full int (int(11)).
..log_namespace is already a full int (int(11)).
..already have pagelinks table.
..templatelinks table already exists
No img_type field in image table; Good.
Already have unique user_name index.
..user_groups table already exists.
..user_groups is in current format.
..wl_notificationtimestamp is already nullable.
..timestamp key on logging already exists.
Setting page_random to a random value on rows where it equals 0...changed 0 rows
Initialising "MediaWiki" namespace...
Clearing message cache...Done.
Done.
Hello, everyone. I'm writing to this group because Wayne Saewyc tells me
that you might be interested in what I'm trying to present. My name is
Robert Rapplean, and I'm a software engineer and political analyst. As you
can imagine, I've spent an immense amount of time attempting to get ideas
across in the massively multiuser, asynchronous world of the Internet. Over
the years I've developed a detailed understanding of the problems inherent
in trying to pursue a logical argument in this kind of environment, and I've
used that understanding to design a tool that addresses these problems.
I am a Wikipedia user, and make it a point to contribute to the articles
when I find I have more expertise than those who have already presented
information. After spending quite a bit of time unwinding the sometimes
barely comprehensible dialogs that have occurred on the discussion pages of
the articles, I've concluded that this particular environment would benefit
greatly from the implementation of exactly the kind of tool that I've
designed.
With that in mind, I'm going to attempt to describe the idea to you. The
remainder of this email is a short description of the tool's design and
the reasons it is structured the way it is.
In my examination of online debates, I've noted a small bestiary of bad
debating habits, almost all of which fall under the categories of "casual
debater" or "hostile debater". Casual debaters are those who don't take
the time to peruse the previous debate that has occurred on a topic. They
tend to re-submit points that have already been debated ad nauseam and
require re-iteration of important talking points. Everyone starts out in
this category, but the casual debater gets bored before they get beyond that
point. Because online debating tools are very poor at organizing previous
information, it quickly becomes a prodigious effort to get up to speed on a
debate. This means that any forum which has enough contributors to form a
decent consensus also has a steady stream of neophytes clogging the
communication streams with off-the-cuff comments and other distractions.
An unfortunate side effect of this is that many of the good debaters get to
the point where they're tired of re-arguing the same points over and over
again. When the debate follows those lines yet again, they tend to quit
contributing, and may leave the forum entirely.
Hostile debaters are those who aren't there to exchange ideas so much as to
spout them. In other words, they're all mouth and no ears. They don't want
to find the truth, they want everyone to accept their personal truth. Their
entire purpose on the forum is to get a personal thrill from defeating the
opposition through wit, strategy, and tactics. As a result, they pursue an
argument via the well-worn tactics of attacking where the enemy is weak and
retreating where the enemy is strong. If they can't win a particular point,
they'll shift the topic to something that they think the opponent might be
less strong on. They'll continue stringing their opponents on a line of
topics until they can find one that the opponent isn't as well versed on,
and then stand on it like a bastion of safety, insisting that it's the only
valid perspective from which to view the concept. If they can't find a weak
point, they'll circle back around to the original topic hoping for a second
try, or resort to standard logical fallacies like ad hominem attacks or faulty
analogies.
Although the design of the tool addresses many other issues (like ballot box
stuffing and squeaky wheel effects), these should be adequate to understand
the reasoning behind the basic structure I'm about to explain. As I go
along, I'm going to compare my design to existing online collaborative
tools, like wikis and forums.
In order to deal with a lot of the tactics of the hostile debater, I started
by removing the linear nature of wikis and forums. You can't lead a person
in circles if you're glued to the spot. With this in mind, the base unit of
this tool is a conjecture, something like "alcoholism is a disease". Each
person may (not must) make one statement about the conjecture. They can
change the statement any time that they like, but that one statement must be
a summation of their entire opinion on that conjecture. Then everyone gets
to vote on the statement that best matches their personal opinion. If none
of them match closely enough, they can make their own statement.
Statements are ranked based on popularity. Additionally, the writer of the
statement indicates the bias of their statement. A bias states that the
conjecture is:
1. factual (based on repeatable phenomena)
2. true (not based on repeatable phenomena, but enough evidence exists)
3. unproven (enough evidence does not exist one way or the other)
4. unprovable (the conjecture requires evidence that is not obtainable)
5. unsupported (the evidence suggests that the conjecture is not true)
6. false (repeatable phenomena disproves the conjecture conclusively)
For the purposes of determining the validity of a conjecture, all statements
with 1 & 2 add their votes together, all with 3 & 4 go together, and all
with 5 & 6 go together. This creates a distinct identification of the
participants' current consensus on the matter.
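To make the mechanism concrete, here is a minimal sketch of the statement/bias/vote model described above. All class names, field names, and sample data are my own illustrative inventions, not part of any existing tool:

```python
# Minimal sketch of the conjecture model: one statement per author, each
# tagged with a bias (1-6), votes tallied into three verdict buckets.
from collections import Counter
from dataclasses import dataclass, field

# Biases 1-2 support the conjecture, 3-4 call it undecidable, 5-6 reject it.
BUCKETS = {1: "supported", 2: "supported",
           3: "undecided", 4: "undecided",
           5: "rejected", 6: "rejected"}

@dataclass
class Statement:
    author: str
    text: str
    bias: int        # 1 = factual ... 6 = false
    votes: int = 0   # one vote per participant

@dataclass
class Conjecture:
    text: str
    statements: list = field(default_factory=list)

    def consensus(self):
        """Tally votes into the three buckets, largest bucket first."""
        tally = Counter()
        for s in self.statements:
            tally[BUCKETS[s.bias]] += s.votes
        return tally.most_common()

c = Conjecture("alcoholism is a disease")
c.statements = [
    Statement("alice", "Medically recognized as a disease.", bias=2, votes=12),
    Statement("bob", "Depends on how 'disease' is defined.", bias=3, votes=7),
    Statement("carol", "It is a behaviour, not a pathology.", bias=5, votes=5),
]
print(c.consensus())  # "supported" leads with 12 votes
```

Ranking statements by popularity would then just be a sort on `votes`; the bucket tally is what gives the "how firmly" signal rather than a flat yes/no count.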
Since people need a place to ask questions and discuss ideas, a standard
message list should be attached to each conjecture, but it is strongly
suggested that all messages on the list expire and vanish after 30 days
or so, to encourage the participants to embody their ideas in their
statements rather than in their messages.
There's a further aspect of this. Every conjecture debated tends to result
in child conjectures, for instance "a disease is anything which affects the
wellness of an individual". These become their own conjectures, with their
own statements and (importantly) their own message lists. Each child
conjecture is voted on to determine its individual validity, and it gets
linked to the parent conjecture. Participants in the parent conjecture can
then rate the relevance of the child conjecture to the parent, and take the
most relevant child conjectures into account when voting on a statement.
Taking this a step further, conjectures can be reused. For instance, a
conjecture like "the will of god is unknowable" could be attached again
and again to a very wide range of parent conjectures without having to
re-create and re-argue it every time.
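The linking and reuse could be sketched like this. Again, every name here is hypothetical; the point is only that a child conjecture is a freestanding object that many parents can link to, each link carrying its own relevance ratings:

```python
# Hypothetical sketch of parent/child conjecture linking: each link carries
# the relevance ratings participants assign, so children can be listed
# most-relevant-first; the same child object can attach to many parents.
from statistics import mean

class Conjecture:
    def __init__(self, text):
        self.text = text
        self.links = []  # list of (child Conjecture, [relevance ratings])

    def attach(self, child, ratings):
        """Link a child conjecture with participant relevance ratings (0-10)."""
        self.links.append((child, list(ratings)))

    def children_by_relevance(self):
        """Child conjectures ordered by mean relevance rating, highest first."""
        return sorted(self.links, key=lambda link: mean(link[1]), reverse=True)

parent = Conjecture("alcoholism is a disease")
# A reusable child: the same object could be attached to other parents too.
definition = Conjecture("a disease is anything which affects wellness")
tangent = Conjecture("the will of god is unknowable")
parent.attach(definition, [8, 9, 7])
parent.attach(tangent, [2, 1])

for child, ratings in parent.children_by_relevance():
    print(f"{mean(ratings):.1f}  {child.text}")
```

Because the child is its own object with its own votes, "re-arguing" it under a new parent reduces to attaching an existing, already-settled node.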
The final result would be that, for each conjecture, all of the reasoning
behind the current decision would be laid out in a readily examinable
format, ordered by relevance. This makes things much, much easier for the
casual arguer. The modular format also makes it extremely easy to slap a
"logic foul" conjecture on anyone who presents falacious arguments. The
non-linear format totally wrecks topic-shifting tactics, and the voting
system indicates not just how people feel about something, but how firmly
they feel about it.
I think that'll do it for an introduction. If this is interesting to you,
please let me know and I can provide you with more details.
Yours,
Robert Rapplean
I had several emails in my inbox this morning because my tools were
returning incorrect data.
It appears that about 6 hours ago enwiki, and only enwiki, stopped
replicating. I now see that the Wikimedia developers have moved enwiki
to its own cluster without warning or coordination. Not like they'd
have anyone to warn: If I can't contact someone with authority, no
doubt that they are unable as well.
Based on the prior track record I expect this to never be fixed, just
as text replication was never fixed and the replication of the Asia
cluster wikis was never fixed.
Toolserver had become mostly useless for many of my projects without
high speed text access, now it is almost completely useless for all of
my projects... and I'm tired of catching flak for the unreliability
of the server. People have depended on the tools I provided, but are
constantly let down by the unreliability of the service.
When I was granted access and when I spent many hours writing software
I had an expectation that someone would be at least trying to maintain
the system. I never expected that it would be ignored, that my work
would go to waste, and that if I offered to do the work I too would be
ignored. When I provided tools that allowed enwiki users to adjust
their processes and work more effectively, I believed that they could
rely on these tools working most of the time. I understand now that I
was mistaken.
I am tired of wasting my time.
Because I can't even expect the nonexistent toolserver administration
to perform the trivial action of turning off my account, I have
deleted my ssh authorized key... thus my account is effectively
disabled. So don't worry, you can go on doing nothing.
On 4/10/06, kate(a)zedler.knams.wikimedia.org
<kate(a)zedler.knams.wikimedia.org> wrote:
> hello,
>
> the account expiration date was originally scheduled for April 1st, but has
> been extended to May 1st. on this date, all accounts will expire (and no
> longer be usable) except those which have had the expiration date extended.
>
> if you have an account, and you would like to keep it:
> - if you have one or more working projects, please describe these (preferably
> with examples, URLs, etc.)
> - if you do not yet have anything ready (particularly if you're a new user),
> please describe what you intend to work on. a rough estimate of when you
> expect it to be ready would be useful. if some issue is holding you up
> (e.g. lack of text access), please mention that.
>
> if you no longer wish to use your account, please say so.
>
> this information should be mailed to <dab(a)daniel.baur4.info> and cc'd to
> <zedler-admins(a)wikimedia.org>. (there's no particular deadline, but if you
> wait until one day before the expiration, you might find that your account
> expires because no-one managed to look at it yet...)
Hi, my name is Felipe Pablos, and I built a license extension for MediaWiki
1.4.7. I have now seen that Wikimedia Commons has a license extension quite
similar to the one I developed. I don't have the time to adapt my extension
to new versions of MediaWiki, but it would be great if the Commons license
extension were available and configurable. Could someone tell me where the
code for Commons' licenses is, if it is available?
Thanks
Hi All,
Is there some way to reliably get a log of what's changed recently in
the main tree through the web interface to Subversion?
I'm trying this URL:
http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/?view=log
... however, more often than not it times out; when it succeeds, it gives
me the history all the way back to April 2003 (a huge log around 3.6 MB
in size).
Is there for example some way to say "only show the last 50/100/200
commits", which presumably would avoid the timeout?
I had a look at:
http://www.viewvc.org/url-reference.html#log-view
... but I couldn't see a suitable magic incantation, so I'm suspecting
it may not be possible, but if anyone knows different please let me
know.
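One workaround, if you have a Subversion client handy rather than going through ViewVC: `svn log` has supported a `--limit` option since Subversion 1.2, which fetches only the newest entries. The svnroot URL below is my guess from the ViewVC path and is unverified, so the snippet just assembles and prints the command to run:

```shell
# Build the command to fetch only the 50 most recent commits instead of
# the full history back to 2003. The repository URL is a guess derived
# from the ViewVC path above -- adjust it if the layout differs.
REPO="http://svn.wikimedia.org/svnroot/mediawiki/trunk/phase3"
echo "svn log --limit 50 $REPO"
```

Running that command from a checkout (with no URL argument) limits the log for the working copy's branch the same way.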
All the best,
Nick.
An automated run of parserTests.php showed the following failures:
A database error has occurred
Query: SELECT ll_lang,ll_title FROM `parsertest_langlinks` WHERE ll_from = '1' FOR UPDATE
Function: LinksUpdate::getExistingInterlangs
Error: 1146 Table 'fiveralpha.parsertest_langlinks' doesn't exist (localhost)
Backtrace:
GlobalFunctions.php line 602 calls wfBacktrace()
Database.php line 473 calls wfDebugDieBacktrace()
Database.php line 419 calls Database::reportQueryError()
Database.php line 818 calls Database::query()
LinksUpdate.php line 535 calls Database::select()
LinksUpdate.php line 108 calls LinksUpdate::getExistingInterlangs()
LinksUpdate.php line 80 calls LinksUpdate::doIncrementalUpdate()
Article.php line 2272 calls LinksUpdate::doUpdate()
Article.php line 1246 calls Article::editUpdates()
parserTests.inc line 666 calls Article::insertNewArticle()
parserTests.inc line 144 calls ParserTest::addArticle()
parserTests.php line 54 calls ParserTest::runTestsFromFile()
The English Wikipedia database has now been moved to its own dedicated server cluster. The master is
db3, the slaves are db4 and ariel. This was done to reduce write load and improve cache efficiency.
Commons is not replicated to the new cluster, instead we've set up a system based on "query groups"
to send commons reads to the main cluster.
-- Tim Starling
Zend has kindly donated a 3-user perpetual license for Zend Studio Professional.
This is Zend's IDE thingy for PHP; most interestingly it includes a debugger,
profiler, and various such extra tools which can be helpful for tricky problems.
http://www.zend.com/products/zend_studio
Domas has expressed specific interest in this; I've got two other seats up for
grabs if anyone thinks they'll really use them.
-- brion vibber (brion @ pobox.com)
> Presenting tens of thousands of people with a donation link that WE KNOW
> WILL NOT WORK WHEN THE MESSAGE IS DISPLAYED not only looks stupid, but is
> preventing many thousands of potential donations.
Maybe the linked donation form could be hosted on mediazilla's server?
Bugzilla was working fine during the outage...