When I grep for "<contributor>" or "<revision>" in
svwiki-20080310-pages-meta-history.xml I find 5,822,491
occurrences. But [[sv:Special:Statistics]] says there have been
6,246,812 edits. What are the 424,321 edits in between? Deleted
pages?
According to [[sv:Special:Statistics]] there are 58,087 user
accounts, but <contributor><username> has 28,416 distinct values.
Is it realistic that half of all registered usernames have never
contributed a single edit (to non-deleted pages)? Can we find out
what happened to them? Did they write spam that was deleted and
the username permanently blocked? Did they just register their
name to stop others from doing so? Or did something go wrong
during the registration?
Of those who did contribute something, of course most usernames
only made very few contributions. This is a long tail. So how do
we separate the regular/serious/active contributors from the
occasional ones? In [[m:board elections]] to the WMF, a limit of
400 edits is used, and this threshold is as good as any.
In <contributor><username> of the sv.wp dump there are 900 names
(and 104 addresses in <contributor><ip>) that have contributed 400
revisions or more (to non-deleted pages). Of these 900, some 80
have names containing "bot" and some are sock puppets, but I guess
that 800 could be eligible to vote. There are 81 admins on sv.wp.
Is one admin per ten eligible voters a "normal" ratio? It also
means we have one eligible voter per 12,500
speakers of the Swedish language (800 out of 10 million).
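In case anyone wants to reproduce these counts, here is a minimal
sketch of the tallying in PHP (only the dump file name is real; it
streams with XMLReader so the multi-gigabyte file never has to fit
in memory):

  <?php
  // Sketch: tally edits per <contributor><username> in the
  // pages-meta-history dump, then count names with 400+ revisions.
  $counts = array();
  $reader = new XMLReader();
  $reader->open( 'svwiki-20080310-pages-meta-history.xml' );
  while ( $reader->read() ) {
      if ( $reader->nodeType == XMLReader::ELEMENT
          && $reader->name == 'username' ) {
          $reader->read(); // advance to the text node inside <username>
          if ( $reader->nodeType == XMLReader::TEXT ) {
              $name = $reader->value;
              $counts[$name] = isset( $counts[$name] )
                  ? $counts[$name] + 1 : 1;
          }
      }
  }
  $reader->close();
  $eligible = 0;
  foreach ( $counts as $name => $n ) {
      if ( $n >= 400 ) {
          $eligible++;
      }
  }
  printf( "%d distinct usernames, %d with 400+ edits\n",
      count( $counts ), $eligible );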
I think 800 is the number of volunteers that should be mentioned
rather than the 58,087 mostly inactive usernames.
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Hello,
I recently made an extension that I uploaded to
http://svn.wikimedia.org/svnroot/trunk/extensions/Click/Click.php and
it works perfectly. Unfortunately it won't work inline; no matter what
I do it always starts a new paragraph or adds an opening <br> - does
anyone know how I can fix this? Maybe it's because it returns HTML,
but I think there should still be a way to make it inline.
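For reference, a reduced sketch of the kind of hook I mean
(hypothetical tag name, not the actual Click.php). My working
assumption is that the parser opens a new paragraph when the returned
HTML contains newlines or block-level elements, so this version
returns a single-line <span>:

  <?php
  // Reduced sketch only; the real Click.php internals are not shown.
  $wgExtensionFunctions[] = 'wfClickSetup';

  function wfClickSetup() {
      global $wgParser;
      $wgParser->setHook( 'click', 'wfRenderClick' );
  }

  function wfRenderClick( $input, $args, $parser ) {
      // No leading/trailing newlines, <span> rather than <div>:
      return '<span class="click">' . htmlspecialchars( $input ) . '</span>';
  }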
MinuteElectron.
Hello
Is it somehow possible to use an extension or hook to generate content
which spans several sections, i.e. content that affects the table of
contents? My current attempts only render a "== title ==" the way a
normal heading would be rendered, but it still does not appear in the
table of contents. My preference would be to have this included in a
template; maybe somebody knows an existing extension which does
something similar.
I looked at the TinyMCE editor and friends, but they seem to use only
the save-after-editing hook, whereas I need something that makes a
database query on every view.
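To make it concrete, here is a sketch of the direction I am thinking
of (all names and the query are made up): since headings only reach
the table of contents if they are still wikitext when the page is
parsed, a parser function, which is expanded like a template, might
work where a tag hook returning HTML does not.

  <?php
  // Sketch (hypothetical names): a parser function whose returned
  // wikitext is expanded in place, so the "== ... ==" headings it
  // emits can show up in the table of contents.
  $wgExtensionFunctions[] = 'wfSectionListSetup';
  $wgHooks['LanguageGetMagic'][] = 'wfSectionListMagic';

  function wfSectionListMagic( &$magicWords, $langCode ) {
      $magicWords['sectionlist'] = array( 0, 'sectionlist' );
      return true;
  }

  function wfSectionListSetup() {
      global $wgParser;
      $wgParser->setFunctionHook( 'sectionlist', 'wfSectionListRender' );
  }

  function wfSectionListRender( $parser ) {
      $parser->disableCache(); // query the database on every view
      $dbr = wfGetDB( DB_SLAVE );
      $res = $dbr->select( 'page', 'page_title',
          array( 'page_namespace' => NS_MAIN ), __METHOD__,
          array( 'LIMIT' => 3 ) );
      $text = '';
      while ( $row = $dbr->fetchObject( $res ) ) {
          $text .= "== " . $row->page_title . " ==\n\n";
      }
      // Returned as wikitext, not HTML, so the headings become real
      // sections with TOC entries.
      return $text;
  }

It would then be used in a page or template as {{#sectionlist:}}.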
bye,
-christian-
For your interest, consideration and API-geeking. Presumably will
involve reading template parameters.
- d.
---------- Forwarded message ----------
From: Brianna Laugher <brianna.laugher(a)gmail.com>
Date: 30 Mar 2008 15:10
Subject: [Commons-l] Commons API
To: commons-l <commons-l(a)lists.wikimedia.org>
Hi,
There is an interesting Firefox extension called Zemanta, that works
with some blogging platforms, to suggest images to match a blog post
you type. One of the sources they use is Commons.
See this post (comments) for a description of how it works and what
it's lacking:
<http://brianna.modernthings.org/article/97/zemanta-wikimedia-commons-for-bl…>
In particular,
"If you have an idea how to correctly capture wikipedia images
attribution (something that would assure at least 50% correct coverage
from 2.8M images), please help us! ;)"
Really, we can't blame people too much for not providing attribution,
when we don't give that information in a standard way, or give a
standard way of accessing it.
Now is as good a time as any to formally write an API to recommend for
other people to use. Aside from the MediaWiki API, there are three
main things I can think of that are often needed to be automated:
* identify any "problem tags" (files with deletion markers shouldn't
be used or indexed by third parties)
* extract license name(s) and URL for a given file
* extract author attribution string for a given file
So I propose we put our heads together and figure out the most robust
algorithm for each of these, and provide some sample code for each.
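As a strawman for the first item, something like this might do (PHP
against the existing MediaWiki API; the 'deletion' substring test is
illustrative only, a real version needs an agreed list of problem
categories):

  <?php
  // Illustrative only: ask the API for a file's categories and look
  // for anything resembling a deletion marker.
  $file = 'Image:Example.jpg';
  $url = 'http://commons.wikimedia.org/w/api.php?action=query'
       . '&prop=categories&format=php&titles=' . urlencode( $file );
  $data = unserialize( file_get_contents( $url ) );
  foreach ( $data['query']['pages'] as $page ) {
      if ( !isset( $page['categories'] ) ) {
          continue;
      }
      foreach ( $page['categories'] as $cat ) {
          if ( stripos( $cat['title'], 'deletion' ) !== false ) {
              echo "$file is tagged for deletion: {$cat['title']}\n";
          }
      }
  }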
I made a start here:
<http://commons.wikimedia.org/wiki/Commons:API>
Contributions and feedback welcome...
cheers,
Brianna
--
They've just been waiting in a mountain for the right moment:
http://modernthings.org/
On Thu, Mar 27, 2008 at 2:34 PM, <raymond(a)svn.wikimedia.org> wrote:
> + $val = trim( ini_get( 'upload_max_filesize' ) );
> + $last = ( substr( $val, -1 ) );
> + switch( $last ) {
switch is case-sensitive, the suffix in the config file is not. Don't
you need a strtoupper() on $last?
> + case 'G':
> + $val2 = substr( $val, 0, -1 ) * 1024 * 1024 * 1024;
> + break;
> + case 'M':
> + $val2 = substr( $val, 0, -1 ) * 1024 * 1024;
> + break;
> + case 'K':
> + $val2 = substr( $val, 0, -1 ) * 1024;
> + break;
> + default:
> + $val2 = $val;
> + }
> + $val2 = $wgAllowCopyUploads ? min( $wgMaxUploadSize, $val2 ) : $val2;
> + $maxUploadSize = wfMsgExt( 'upload-maxfilesize', 'parseinline', $wgLang->formatSize( $val2 ) );
You seem to be assuming that nobody is setting upload_max_filesize to
an invalid value, or that if they do, PHP will somehow sanitize it so
that it fits one of those cases. Is that the case? What happens if
you set upload_max_filesize to "jagdajgadk" or '<span
onload="alert(\'Evil!\')"></span>' or something? If this does work,
you should add a comment that testing indicates that PHP seems to
guarantee that a value in this form is passed.
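For comparison, a more defensive version of that block might look
like this (a sketch with a made-up function name, not the committed
code):

  <?php
  // Sketch: uppercase the suffix before the switch, and rely on
  // intval() so garbage like "jagdajgadk" degrades to 0 instead of
  // being echoed back into the message.
  function wfShorthandToBytes( $val ) {
      $val = trim( $val );
      if ( $val === '' ) {
          return 0;
      }
      $num = intval( $val ); // ignores any trailing suffix
      switch ( strtoupper( substr( $val, -1 ) ) ) {
          case 'G':
              $num *= 1024;
              // fall through
          case 'M':
              $num *= 1024;
              // fall through
          case 'K':
              $num *= 1024;
      }
      return $num;
  }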
I want to propose new mechanisms to support importing and exporting
wiki pages from/to OpenOffice documents.
I propose using JOD Converter
(http://www.artofsolving.com/opensource/jodconverter) to handle
different OpenOffice document formats (.doc, .odt, etc.). The
motivation for my project comes from the earlier Create/Edit API
proposal, which made me decide to add more functionality than just
creating wiki pages. That is why I propose three new mechanisms
embedded in MediaWiki:
- New API actions: import_document, export_document
- New toolbox link: ImportDocument. The Upload toolbox could be
modified for this, but the main goal of upload is to save a file into
the server's filesystem under a namespace, like [[Image:<image>]] or
[[Media:<file>]], so I prefer to create a new toolbox link that
creates a new wiki page rather than uploading the file.
- New embedded tab: ExportDocument. Every time a wiki page is rendered
on the screen, there could be an option to export the page to one of
the supported document formats.
JOD Converter is built in Java; it can be used as a tool (an
executable jar) or as a web service. The supported formats include
.doc, .odt, .ods, .xls, .odp, .ppt, .pdf and more. There is a direct
conversion from text formats to wiki code, but not for spreadsheet or
presentation formats; for those, my idea is to get HTML output from
the conversion and, because the result would contain duplicated
<html> tags, replace those tags with something else, for example
<jodHTML>, so the result can be stored.
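For the import path, the call to the converter could be as simple as
shelling out (a sketch; the jar name and paths are illustrative, and
the web-service mode would avoid the shell entirely):

  <?php
  // Sketch only: convert an uploaded document to HTML by invoking
  // the JOD Converter command-line jar.
  function wfJodConvert( $inputFile, $outputFile ) {
      $cmd = 'java -jar jodconverter-cli.jar '
           . wfEscapeShellArg( $inputFile ) . ' '
           . wfEscapeShellArg( $outputFile );
      $retval = 0;
      wfShellExec( $cmd, $retval );
      return $retval === 0;
  }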
I propose these mechanisms because it would be very helpful for
external applications to import/export documents from MediaWiki; that
is the reason for the new API actions. I suggest the toolbox link and
the export tab because MediaWiki users may find them very useful.
I hope for feedback from you; I have worked with MediaWiki before,
modifying some extensions.
Thank you all.
--
Carlos Eduardo Atencio Torres
National University of San Agustin
Arequipa-Peru
Hi All,
So, I went off and learned about compiling and linking and C++ and
makefiles, and I've got a very early version of a search daemon working. It's
got a couple of minor bugs, but I wanted to bring it up again as a possible
unencumbered search and indexing solution.
I set up a project page at clucened.com, and there is a pointer to my test
implementation. Currently it isn't pointed at the categories index, but
I'll do that shortly. I also posted the source code for the searcher
daemon. I will work on fixing the bugs, implementing paging, etc., etc.,...
I'd love any feedback. Yes, I already know my code is really rough and
needs a lot of improvement and more error checking, but it'll get there.
("Release early and often").
Aerik
Hello everyone.
What happened with the static HTML dumps?
Are they no longer available?
The current dump (02/2008) seems to have stalled.
Regards
Roberto Gerlando
simetrical(a)svn.wikimedia.org wrote:
> Revision: 32497
> Author: simetrical
> Date: 2008-03-27 13:31:32 +0000 (Thu, 27 Mar 2008)
>
> Log Message:
> -----------
> I don't know what problems in Special:Log r26850 was referring to (more context, please?), but without delimiters, multiword renames can be confusing.
>
> Modified Paths:
> --------------
> trunk/extensions/Renameuser/SpecialRenameuser.i18n.php
>
> Modified: trunk/extensions/Renameuser/SpecialRenameuser.i18n.php
> ===================================================================
> --- trunk/extensions/Renameuser/SpecialRenameuser.i18n.php 2008-03-27 12:11:57 UTC (rev 32496)
> +++ trunk/extensions/Renameuser/SpecialRenameuser.i18n.php 2008-03-27 13:31:32 UTC (rev 32497)
> @@ -31,7 +31,7 @@
>
> 'renameuserlogpage' => 'User rename log',
> 'renameuserlogpagetext' => 'This is a log of changes to user names',
> - 'renameuserlogentry' => 'has renamed $1 to $2',
> + 'renameuserlogentry' => 'has renamed $1 to "$2"',
> 'renameuser-log' => '{{PLURAL:$1|1 edit|$1 edits}}. Reason: $2',
> 'renameuser-move-log' => 'Automatically moved page while renaming the user "[[User:$1|$1]]" to "[[User:$2|$2]]"',
> );
As far as I remember, the link was not parsed in Special:Log and was
shown literally.