Hi,
I have two questions. When looking at the data, there seem to be lines
referring to the same article but with different encoding styles, such
as
en John_Edwards_presidential_campaign%2C_2008 16 16
en John_Edwards_presidential_campaign,_2008 5 5
Is there a way that the wikistats files could take this into account
and report just one line per existing article, no matter what the
encoding is?
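For illustration, here is a rough post-processing sketch of what I mean
(the "pagecounts.txt" filename and the "project title count bytes" column
layout are just assumptions on my part):

<?php
// Merge wikistats lines whose titles differ only in percent-encoding,
// e.g. "...%2C_2008" vs ",_2008", and sum their counts.
$totals = array();
foreach ( file( 'pagecounts.txt' ) as $line ) {
    $parts = preg_split( '/\s+/', trim( $line ) );
    if ( count( $parts ) < 3 ) {
        continue;
    }
    list( $project, $title, $count ) = $parts;
    $key = $project . ' ' . rawurldecode( $title ); // normalise the encoding
    if ( !isset( $totals[$key] ) ) {
        $totals[$key] = 0;
    }
    $totals[$key] += (int)$count;
}
foreach ( $totals as $key => $count ) {
    echo "$key $count\n";
}
?>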
Second, I find the number of requests on the individual main pages
rather low. Might this be due to the counting style of only counting
those people who, for example, explicitly asked for
http://de.wikipedia.org/wiki/Hauptseite rather than
http://de.wikipedia.org ?
Mathias
A couple of stylistic comments on this.
> - array_intersect( (array)$removegroup, $changeable['remove'] ) );
> + array_intersect( (array)$removegroup, $changeable[1] ) );
> $addgroup = array_unique(
> - array_intersect( (array)$addgroup, $changeable['add'] ) );
> + array_intersect( (array)$addgroup, $changeable[0] ) );
Numeric indexes are much less readable than string indexes in this
case. If you really want you could use named constants, but you
really should not use magical meanings like 1 = remove, 0 = add. It's
confusing.
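For example (just a sketch of the two alternatives, not code from the
patch; the GROUPS_ADD / GROUPS_REMOVE constant names are made up):

// Readable: the string key says what it means.
$removegroup = array_unique(
    array_intersect( (array)$removegroup, $changeable['remove'] ) );

// If you really must use numeric indexes, at least name them:
define( 'GROUPS_ADD', 0 );
define( 'GROUPS_REMOVE', 1 );
$removegroup = array_unique(
    array_intersect( (array)$removegroup, $changeable[GROUPS_REMOVE] ) );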
> + switch($retval[0])
> + {
While this is a stylistic quibble, MediaWiki code does consistently
use K&R-style braces, with the opening brace on the same line, i.e.:
switch($retval[0]) {
This is documented in docs/design.txt, and helps to give the code a
more consistent look.
> + function fetchUser_real( $username ) {
Likewise, we only rarely use underscores. This should have been
fetchUserReal. Consistent function naming conventions make things
easier to remember and work with: you only have to remember the
function's name, not the name *plus* how the name is formatted.
Hi,
Using MW 1.9.3 (yes - an oldie!), I am looking for a solution to start
the output of the Recent Changes list with an offset of records or days.
The source code seems to tell me that there is no way - is that right?
With a bigger $wgRCMaxAge we get a "memory exhausted" error when
looking at the older records (with a bigger "&limit=").
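What I am after is roughly this (an untested sketch, assuming it runs
inside MediaWiki, e.g. from a small maintenance script): skip the newest
500 rows and show the next 50, straight from the recentchanges table.

// Untested sketch against the 1.9 schema.
$dbr = wfGetDB( DB_SLAVE );
$rc  = $dbr->tableName( 'recentchanges' );
$res = $dbr->query( "SELECT rc_timestamp, rc_namespace, rc_title
    FROM $rc ORDER BY rc_timestamp DESC LIMIT 500, 50" );
while ( $row = $dbr->fetchObject( $res ) ) {
    print "{$row->rc_timestamp}  {$row->rc_title}\n";
}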
Any suggestions?
Uwe (Baumbach)
U.Baumbach(a)web.de
Hello,
> Hi. Fabulous colleague-shareholder-report you have there; but could
> you fix the transition speed on your powerpoint slides? and the font
> in the footer...
Transition speed? PowerPoint slides? That is a workbook, not a
presentation :) People read it as a PDF book! :)
> Is this a general trend -- remote hands not being available during
> critical moments -- or a chicken-and-egg issue with other elements
> (such as Rob being around)?
It gets complicated with all operations folks being in Europe -
hardware provisioning, reinstalls, etc. are usually managed not from
the U.S. That limits the time window for various work. Datacenter ops
are more fixed in time than anything else - and when we needed things
done ASAP, people were in schools/jobs/etc.
Now with Rob around we can be pretty sure that any critical issue can
be dealt with swiftly, and non-critical jobs still handled in reasonable time.
> This is cool; I had no idea. Is there a longer description of how
> it works?
Probably Mark could tell much more, but generally equal providers
like to exchange their traffic for free - they hop onto traffic
exchanges (think of a huge switch; well, in the real world it is a
bunch of big switches :) and look for peering partners. Usually it is
a bit of trouble for content providers to get peering, but Google has
it with nearly every major provider. Smaller players like us have to
be really cool. Mark did lots of social work to present us as cool in
the networking world, and we are allowed to use some free resources.
The press release about such activities in Amsterdam was at:
http://wikimediafoundation.org/wiki/AMS-IX
> Any offers of support from multiple really big donated hosting places?
The big problem is that it has to have hardware - lots of it - to
support our cached data set, and if you try to disperse it over
multiple datacenters, really complex problems start, like how to
balance the requests so they go to the datacenter which actually has
that data. Stuff like 'let's have the French go here, the Germans
there' adds lots of administrative work - from maintaining the whole
platform to actually troubleshooting it.
Generally, the more datacenters there are, the more likely we are to
miss problems. This was especially visible when we moved Asian
languages to Asia - by having platforms we manage less closely than
Tampa, we'd eventually end up with them working slower, with more
errors, etc. What was interesting - no Asians would come and tell us
that - it seems everyone there is used to bad international sites.
Once we reduced the complexity and did just what was nice, easy to
manage and efficient, we ended up having people in those countries say
'yay, it is very fast, faster than local sites'.
So, if someone came up with a big donated place that would bring as
much caching hardware as we have now in Amsterdam or in Tampa, it
could probably be considered. Still, for many of these places, hosting
us would be as expensive as covering our costs. And by using more
sites, our costs increase.
> Are there recent stats on the # of reusers, sites, contributors;
> mediawiki extension variants / repositories outside the main tree?
Well, quite a few people commit extensions to our repositories:
http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/
The community outside our #mediawiki cabal has not formed, though
there are various extensions floating around that are more suitable
for other sites than for us. And there are lots of small wikipedias or
mediawikis all around the world (I run one at my job; management never
called it a 'wiki' or 'mediawiki' - that sounded like old, not very
nice software).
> Ditto for stats on # and quality of patches from colleagues from other
> shops. Is there a wall of huge-site-heroes for those who release
> their patches?
I mean more the knowledge sharing with people who run big
non-mediawiki sites. Like, folks from Yahoo helped with APC. Google
has nice patches for MySQL. SixApart helps with memcached. Even though
they don't directly fix our bugs, their engineers are willing to
communicate, discuss operations, and sometimes improve software we
use. And in the end they can even buy you a drink. :)
Best regards,
--
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
All,
It's been quite a while and progress has been sporadic, but I've
recently posted an "installable" version of Multilingual MediaWiki (MLMW).
I'm still working on the interface messages; http://mw.visc.us/ is a bit
"buggy", but it is coming along.
Quite simply, I went through more files than I ever thought I'd need to,
and added code for an additional column, the language sub-tag ID.
I also wrote a parser to load language tag and sub-tag data directly
from the distributed tab and text files.
The code is activated with $wgLanguageTag = true; and so, when it is
false, the behavior is exactly as it is in the existing MediaWiki
distribution.
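In LocalSettings.php that is just:

# When this is false, behaviour is exactly as in stock MediaWiki.
$wgLanguageTag = true;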
We've added a "Set" namespace, which acts as the interlanguage linking
system.
I'd like feedback, and personal attacks ;-)
Keep the bug reports on mw.visc.us, and off of this mailing list.
http://svn.wikimedia.org/viewvc/mediawiki/branches/mlmw/
I'm going to try to fill in the missing features before updating the
code to the latest MediaWiki revision.
I'll post a patch file to this list in the coming week, which will
perhaps be easier to critique.
In the long run, I'd like MLMW to support full language tagging and
filtering.
In the short run, I'd like MLMW to be a workable option for smaller
wikis and multilingual communities.
Finally, I'd like comments. Should I make MLMW an extension instead of
a patch to the core MediaWiki code?
If so, I would need to add quite a few new hooks to MediaWiki, some of
them in awkward locations.
Thanks all,
-Charles
Our ever-innovative friends the spammers have discovered subscribing
to Mailman lists. We got one spam posted to wikien-l from a subscribed
address today. It got caught in the mod queue, but nevertheless ...
Is this a well-known phenomenon? Is there anything like captchas in
Mailman to avert this sort of thing?
- d.
On 1/4/08, catrope(a)svn.wikimedia.org <catrope(a)svn.wikimedia.org> wrote:
> Revision: 29261
> . . .
> * Unbroke Special:Userrights for wikis without pretty URLs
> ** This probably shouldn't be hard-coded the way I did it
> . . .
> + $form .= Xml::hidden( 'title', 'Special:Userrights' );
Strange. I'm not sure how to fix this more correctly. You would
think that action="http://example.com/w/index.php?title=Special:Userrights"
plus a parameter user=Username would give a URL of
http://example.com/w/index.php?title=Special:Userrights&user=Username,
appending the parameters to the URL correctly, but you're right, it
seems to give http://example.com/w/index.php?user=Username. Maybe
that's why it was POSTed in the first place.
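For what it's worth, the underlying reason is that on a GET submission
the browser throws away the query string in the form's action URL and
rebuilds it from the form fields, so a title= that only lives in the
action is lost on wikis without pretty URLs. A minimal sketch (not the
actual Special:Userrights code; $username is a placeholder):

global $wgScript;
$form  = Xml::openElement( 'form',
    array( 'method' => 'get', 'action' => $wgScript ) );
// The page name has to travel as a form field, because
// ?title=Special:Userrights in the action would be dropped on GET.
$form .= Xml::hidden( 'title', 'Special:Userrights' );
$form .= Xml::input( 'user', 30, $username );
$form .= Xml::submitButton( 'Edit user groups' );
$form .= Xml::closeElement( 'form' );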
Perhaps the best thing to do would be to just always use non-pretty
URLs for this, the way we do all the other times we use URL
parameters. I don't have time to implement that this minute,
unfortunately. Maybe later, if no one else does it first.
Hi all,
I am trying to develop a Visio-to-image converter for MediaWiki. It
should work like this:
- Visio file is uploaded in the Wiki
- Image file is created on the Server
- Image file is automatically uploaded in the Wiki
For the automatic upload, I would like to create an upload page and fill
in the source and the destination file - which is normally done by the
user - with the values I know from the server.
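As a fallback I could avoid the internal upload code entirely and shell
out to the importImages.php maintenance script (only a rough sketch, not
tested against 1.10.1; "vsd2png" and all paths are placeholders, and it
assumes that script is present in this version), but I would prefer to
call the upload code directly:

// Convert the uploaded Visio file, then let importImages.php push the
// result into the wiki instead of driving the normal upload form.
$visio = '/data/uploads/diagram.vsd';
$png   = '/data/converted/diagram.png';
exec( 'vsd2png ' . escapeshellarg( $visio ) . ' ' . escapeshellarg( $png ),
    $output, $rc );
if ( $rc === 0 ) {
    // importImages.php uploads every matching file in the given directory
    exec( 'php /path/to/wiki/maintenance/importImages.php ' .
        escapeshellarg( dirname( $png ) ) . ' png' );
}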
The problem is that I have looked through the code for quite a while,
but I did not find the right place to call the upload.
I am using MediaWiki 1.10.1. I know this is an older version, but I
don't really want to change it.
Can anyone help me, or does anyone have any suggestions?
Greetings
Christoph
Hello everybody,
thanks for your many replies. I meant normal pages in namespaces created
by me. I implemented everything via the skin PHP file. For those who are
interested in the code:
<?php
// Show extra sidebar links depending on the current user's groups.
$groups = $wgUser->getGroups();
if ( in_array( "usergroup1", $groups ) || in_array( "sysop", $groups ) ) {
?>
<li>
<a href="http://www.mywiki.de/index.php?title=Usergroup1:Home">Usergroup1 Portal</a>
</li>
<?php
}
if ( in_array( "sysop", $groups ) ) {
?>
<li>
<a href="http://www.mywiki.de/index.php/Docu:Startseite">Documentation</a>
</li>
<?php
}
?>
Thanks a lot for the hints
julia
Is anyone interested in a Wikimedia / MediaWiki / whatever meet at the
FOSDEM conf in Brussels? (23-24 February 2008)
It may be a little late to get people organized who aren't already
planning to go, but I imagine some interested parties are already going,
and it might be nice to try to arrange something.
http://www.fosdem.org/2008/
-- brion vibber (brion @ wikimedia.org)