Hey,
I put together a Firefox toolbar to help out our editors, in case anyone
is interested:
http://www.wikihow.com/Use-the-wikiHow-Editor%27s-Toolbar
It's pretty basic, but it shows a notification when an unpatrolled edit
has been made to a featured article and gives some stats on how many
unpatrolled edits there are in total and how many logged-in users have
edited in the past 30 minutes. An indicator also lights up when you
receive a new talk page message.
So far the results have been promising: vandalism on featured articles
has been short-lived, and the backlog of unpatrolled edits has dropped
from consistently over 500 to fewer than 25 on average.
I'm not sure if other wikis could benefit from this functionality or
if a more general toolbar could be made out of it, but I thought I'd
pass it along.
Travis
We're planning on doing another master split this weekend. The idea is to
split off the following databases onto their own master:
bgwiki
bgwiktionary
commonswiki
cswiki
dewiki
enwikiquote
enwiktionary
eowiki
fiwiki
idwiki
itwiki
nlwiki
nowiki
plwiki
ptwiki
svwiki
thwiki
trwiki
zhwiki
This is a hand-selected list, designed to balance both data set size and
request rate across the two non-en partitions. Some of the data used can
be found at:
https://wikitech.leuksman.com/view/Index_size_versus_traffic
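For illustration only: the list was hand-selected, but the same kind of two-way balancing could in principle be approximated with a greedy heuristic. The wiki names below come from the list above, while the size and request figures are invented placeholders, not the real numbers from the wikitech page.

```python
# Illustrative sketch of greedy two-way partitioning by a combined
# load score. All size_gb / req_per_s values are made up.
WIKIS = {
    "dewiki":      {"size_gb": 40, "req_per_s": 300},
    "commonswiki": {"size_gb": 35, "req_per_s": 250},
    "itwiki":      {"size_gb": 20, "req_per_s": 150},
    "nlwiki":      {"size_gb": 18, "req_per_s": 120},
    "plwiki":      {"size_gb": 15, "req_per_s": 100},
    "svwiki":      {"size_gb": 10, "req_per_s": 60},
}

def load_score(stats):
    """Fold data size and request rate into one load figure."""
    return stats["size_gb"] + stats["req_per_s"] / 10.0

def split_two_ways(wikis):
    """Assign each wiki to whichever partition currently has less load."""
    loads = [0.0, 0.0]
    partitions = ([], [])
    # Place the heaviest wikis first so the greedy choice balances well.
    for name in sorted(wikis, key=lambda w: load_score(wikis[w]), reverse=True):
        i = 0 if loads[0] <= loads[1] else 1
        partitions[i].append(name)
        loads[i] += load_score(wikis[name])
    return partitions, loads

partitions, loads = split_two_ways(WIKIS)
print(partitions, loads)
```

In practice the real decision also has to account for growth trends and replication lag, which is why a hand-picked list makes sense here.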
There should be no significant disruption to the main web service. Some
subsidiary services, such as toolserver replication, may be affected.
-- Tim Starling
Hoi,
I met Rob Savoye of OLPC and Gnash fame at FOSDEM. Rob received a lot of
attention as an OLPC developer, and he had one of the first systems with
him. It was really cool to hold one of these early machines, and I can
really imagine myself using it as an e-book reader :)
The reason I write about this meeting is that Rob mentioned he is also
involved in a search engine that will become available under a GPL
license. Rob expects the software to debut around October, and he is
looking for people with an interest in working on such a system. If you
are interested: rob at lulu dot com.
Thanks,
GerardM
The install script for MediaWiki 1.9.3 points to
/wikiconfig/index.php
instead of just
/config/index.php
The tar.gz file on SourceForge has a file system layout with the older
/config directory, not /wikiconfig.
This causes the install to fail unless you manually type in
http://<site>/config/index.php
Jeff
I was trying to parse the Wikipedia dumps, but unfortunately I find the
downloadable XML file a little hard to parse. I was wondering if there
is a neat way to extract:
1. The article title
2. The article content (without links to articles
in other languages, external links, and so on)
3. The categories.
Also, I find that there are a large number of tools to convert plain
text to MediaWiki text. What if I want to go the other way and extract
information exactly the way it appears on the Wikipedia site?
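For the first three items, something along these lines might work as a starting point. This is only a sketch: the inline sample stands in for a real pages-articles dump, and real dumps declare an XML namespace (xmlns="http://www.mediawiki.org/xml/export-0.x/") that the element lookups would also need to account for.

```python
# Sketch: stream title, text, and categories out of a MediaWiki XML
# dump with the standard library. iterparse keeps memory flat even on
# multi-gigabyte files; the inline SAMPLE replaces a real dump here.
import io
import re
import xml.etree.ElementTree as ET

SAMPLE = b"""<mediawiki>
  <page>
    <title>Example</title>
    <revision>
      <text>Some article text. [[Category:Samples]] [[de:Beispiel]]</text>
    </revision>
  </page>
</mediawiki>"""

def iter_pages(source):
    """Yield (title, text, categories) for each <page> element."""
    for _, elem in ET.iterparse(source, events=("end",)):
        tag = elem.tag.rsplit("}", 1)[-1]  # drop any XML namespace prefix
        if tag != "page":
            continue
        title = elem.findtext("title") or ""
        text = elem.findtext("revision/text") or ""
        cats = re.findall(r"\[\[Category:([^\]|]+)", text)
        # Strip category markup and interlanguage links like [[de:Beispiel]].
        text = re.sub(r"\[\[Category:[^\]]+\]\]", "", text)
        text = re.sub(r"\[\[[a-z]{2,3}(-[a-z]+)?:[^\]]+\]\]", "", text)
        elem.clear()  # free the element once we are done with it
        yield title, text.strip(), cats

for title, text, cats in iter_pages(io.BytesIO(SAMPLE)):
    print(title, cats)  # prints: Example ['Samples']
```

Rendering the text "exactly the way it appears on the site" is a much harder problem, since templates and parser functions are expanded server-side; for that you essentially need the MediaWiki parser itself.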
Harish
After running mwdumper (downloaded from wikimedia.org and dated 1 Feb 06)
with either of the following commands:
java -jar mwdumper.jar --format=mysql:1.5
/wikidump/dump/enwiki-20070206-pages.articles.xml | mysql
--default-character-set=utf8 -u root -p chrpdb
java -jar mwdumper.jar --format=mysql:1.5
/wikidump/dump/enwiki-20070206-pages.articles.xml | mysql -u root -p
chrpdb
no pages show up in any of the edit histories of the new wiki at all.
In other words, the import does not appear to work. This has also been
reported on several other blogs and wikis.
WIKI CONFIG:
This wiki is powered by *MediaWiki <http://www.mediawiki.org/>*,
copyright (C) 2001-2006 Magnus Manske, Brion Vibber, Lee Daniel Crocker,
Tim Starling, Erik Möller, Gabriel Wicke, Ævar Arnfjörð Bjarmason,
Niklas Laxström, Domas Mituzas, Rob Church and others.
* MediaWiki <http://www.mediawiki.org/>: 1.8.2
* PHP <http://www.php.net/>: 5.1.4 (apache2handler)
* MySQL <http://www.mysql.com/>: 5.0.18
* Extensions:
o Parser hooks:
+ /Cite/ <http://meta.wikimedia.org/wiki/Cite/Cite.php>,
adds <ref[ name=id]> and <references/> tags, for
citations, by Ævar Arnfjörð Bjarmason
+ /ParserFunctions/
<http://meta.wikimedia.org/wiki/ParserFunctions> by
Tim Starling
o Extension functions:
+ wfSetupParserFunctions and wfCite
o Parser extension tags:
+ <ref>, <references> and <pre>
o Parser function hooks:
+ expr, if, ifeq, ifexpr, switch, ifexist, int, ns,
urlencode, lcfirst, ucfirst, lc, uc, localurl,
localurle, fullurl, fullurle, formatnum, grammar,
plural, numberofpages, numberofusers,
numberofarticles, numberoffiles, numberofadmins,
language, padleft, padright and anchorencode
* Hooks:
o LanguageGetMagic: wfParserFunctionsLanguageGetMagic
o ParserClearState: (Cite, clearState)
Jeff
Running rebuildImages.php --missing produces a large number of warnings
due to undefined and unreferenced PHP variables. After
running across a full Wikipedia images directory it misidentifies all
of the .png files as MIME type text/plain and tags them as
"possible malicious code". The files are simple PNG images downloaded
from Commons.
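For what it's worth, PNG files can be identified unambiguously by their fixed 8-byte signature rather than by content sniffing. A minimal sketch of that check (this is not the detection code rebuildImages.php actually uses):

```python
# Sketch: check for the fixed 8-byte PNG magic signature directly,
# instead of trusting a MIME sniffer that can report text/plain.
import tempfile

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """True if the file begins with the PNG magic bytes."""
    with open(path, "rb") as f:
        return f.read(len(PNG_SIGNATURE)) == PNG_SIGNATURE

# Quick demo against a synthetic file.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(PNG_SIGNATURE + b"\x00" * 16)
print(looks_like_png(f.name))  # prints: True
```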
Jeff
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r20060).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* TODO: Link containing double-single-quotes '' (bug 4598) [Has never passed]
* TODO: message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* TODO: message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* TODO: HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML nested bullet list, open tags (bug 5497) [Has never passed]
* TODO: HTML nested ordered list, open tags (bug 5497) [Has never passed]
* TODO: Inline HTML vs wiki block nesting [Has never passed]
* TODO: Mixing markup for italics and bold [Has never passed]
* TODO: 5 quotes, code coverage +1 line [Has never passed]
* TODO: dt/dd/dl test [Has never passed]
* TODO: Images with the "|" character in the comment [Has never passed]
* TODO: Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* TODO: Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!