Hello,
I have done some profiling while loading my MediaWiki main page
( http://www.twenkill.net/wiki/ ), which uses the PHPTAL MonoBook skin and
is set to French. For profiling I used "Advanced PHP Profiler".
The first problem I found is that loading the page uses up to 8,417,176
bytes of memory! It looks like MediaWiki includes every single class
script, even the ones it will never use.
Here are some candidates for fixing:
MEMORY:
1/ Skins:
I use the PHPTAL MonoBook skin as the default, but the software also loads
the three skin classes SkinStandard.php, SkinNostalgia.php and
SkinCologneBlue.php. They should only be included if they are the default
skin or the current user's skin (see the sketch at the end of this
section). That would save close to 200 KB of memory.
2/ Feed:
The feed class is loaded in Skin.php only to retrieve the available feeds
(through an array in feed.php); that call should only be made for
syndicated pages.
Similarly, LogPage, DifferenceEngine and SearchEngine aren't useful for
plain articles. Dropping them would save about 30 KB, 340 KB and 200 KB
respectively.
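Roughly the kind of conditional include I have in mind for 1/ (just a
sketch; the function name and the way the skin name is obtained are made
up for illustration, this is not actual MediaWiki code):

  # Sketch only: pull in just the skin class that will actually be used,
  # instead of require()ing all of them up front.
  function loadSkinClass( $skinName ) {
      $knownSkins = array(
          'standard'    => 'SkinStandard.php',
          'nostalgia'   => 'SkinNostalgia.php',
          'cologneblue' => 'SkinCologneBlue.php',
      );
      if ( isset( $knownSkins[$skinName] ) ) {
          require_once( $knownSkins[$skinName] );
      }
  }

The same pattern would work for the feed / LogPage / DifferenceEngine /
SearchEngine includes: only load them on the code paths that actually use
them.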
CPU:
1/ It looks like LanguageFr->ucfirst() uses about 20% of the total script
time, for the sole purpose of uppercasing the first character of a given
string by calling strtr(). Maybe the strings could be treated as UTF-8 and
a built-in PHP uppercase function called instead (see the sketch at the
end of this section)? I think Tim Starling looked a bit at this.
2/ wfProfileOut() uses 5% even though it's an empty function. Using single
quotes instead of double quotes for its arguments seems to have improved
it a bit :)
3/ do_html_entity_decode() uses close to 4%; maybe it could be replaced by
the built-in html_entity_decode(), which appears to do the same job
(PHP >= 4.3.0).
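For 1/ and 3/ above, this is roughly what I mean (a sketch only; it
assumes the mbstring extension is available and is not the actual
Language class code):

  # Sketch, assuming mbstring: uppercase only the first character,
  # treating the string as UTF-8, instead of a big strtr() mapping.
  function utf8Ucfirst( $str ) {
      $first = mb_substr( $str, 0, 1, 'UTF-8' );
      $rest  = mb_substr( $str, 1, mb_strlen( $str, 'UTF-8' ), 'UTF-8' );
      return mb_strtoupper( $first, 'UTF-8' ) . $rest;
  }

  # And for 3/, the built-in function would replace do_html_entity_decode():
  $text = html_entity_decode( $text, ENT_QUOTES );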
That's all for tonight. For the memory issue, it looks like a fix would be
to instantiate a User object first (to get its options), then an Article
object (to see whether it's a feed / special page), and only then create
the skin.
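In pseudo-code (the function names below are purely illustrative, not real
MediaWiki calls):

  # Illustration only, not actual MediaWiki code.
  $user    = newUserFromSession();      # 1. user first: options, skin, language
  $article = newArticleFromRequest();   # 2. article next: feed? special page?
  $skin    = createSkinFor( $user );    # 3. only now include and build one skin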
--
Ashar Voultoiz
[[fr:Utilisateur:Hashar]]
Hi,
I couldn't find a forum to ask for help other than this one (I am aware
that this may be somewhat off-topic; sorry, I am desperate after two days
of unsuccessful attempts).
I downloaded the English version of the whole Wikipedia history (some
10 GB) and installed it in a MySQL database. The data is present: I can
access it, e.g. with "mysql -> SELECT old_title FROM old;" or with a Perl
script that opens the database via DBI, executes any SQL statement and
shows the results on my screen. No problem on that side. What I can't get
at is the actual (uncompressed, human-readable) content of the old_text
column itself. It's compressed, OK; I got that from the PHP source of
MediaWiki. But what's wrong with the following code?
use DBI;
use Compress::Zlib;   # provides uncompress()

my $sth = $dbh->prepare("SELECT old_title, old_text FROM old");
$sth->execute;
while (my @row = $sth->fetchrow_array) {
    if (defined $row[0]) {
        print $row[0] . " ";          # old_title
    }
    if (defined $row[1]) {
        my $a = uncompress($row[1]);  # try to decompress old_text
        print $a . "\n";
    }
}
$a is not defined after this call. If I try another approach from
Compress::Zlib, like:
sub decompressText {
    my $d;
    my $status;
    my $out;
    my $out2;

    ($d, $status) = deflateInit();   # -Level => Z_BEST_COMPRESSION
    $status == Z_OK
        or die "INIT failed\n";
    ($out, $status) = $d->deflate($_[0]);
    print "STATUS " . $status . "\n";
    $status == Z_OK
        or die "DEFLATE failed\n";
    ($out2, $status) = $d->flush();
    $status == Z_OK
        or die "FLUSH failed\n";
    if (defined $out)  { print "DEFINED.\n"; }
    if (defined $out2) { print "DEFINED2.\n"; }
    my $z = $out . $out2;
    return $z;
}
decompressText(old_text) just gives me the same binary stuff I get from
the plain SELECT statement, without any (de)compression happening at all.
According to the perldoc, both failure messages would indicate that the
decompression wasn't successful.
So my question is: how do I decompress "old_text" in the table "old"?
Please, please help. I have tried for two days and I am pretty sure the
error is something obvious.
THANK YOU!
KaHa242
I suggest two immediate changes to the category content display:
1) That, when a category contains fewer than 200 pages (arbitrary number),
we use a vertical display rather than a horizontal one, i.e.
Bohr, Niels
Einstein, Albert
Turing, Alan
Zuse, Konrad
In the long term we'll want neat things like paging, splitting by letter
etc. And of course the most requested feature is showing all categories
and articles in subcategories on a single page (tree-view), but I won't
code that one.
2) That the sort key is actually used as the page title in the category
display, so that [[Category:Albert Einstein|Einstein, Albert]] is
rendered as such. It should be possible to override this, maybe using the
syntax [[Category:Albert Einstein|:Einstein, Albert]].
Any objections?
Regards,
Erik
I just returned from vacation, and find the new software/skin in place.
Great! And thanks to everyone involved.
However...
* There is no longer a "section edit" link for the very first section
(before the first heading).
* In MonoBook, I cannot turn off underlined links. When I'm not logged
in, links are not underlined, as I prefer. Logged in, they always are
(ugh!), and fiddling with the "underline links" setting in my
preferences doesn't help. Strangely, on de.wikipedia I don't get the
underlines even when logged in.
Anyone up for some hotfixes? :-)
Magnus
The querycache table stores up to 1000 items from the miser-mode special
pages (currently Ancientpages, Deadendpages, Lonelypages, Longpages,
Popularpages, Shortpages, Wantedpages), so that these special pages can be
displayed quickly from pre-cached data with a nice, navigable display.
(You can change the number of items shown and page through them, and
wanted pages that have since been created ought to show as such, though
I've not tested this.)
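To illustrate how the cached display works (the qc_* column names here are
from memory, so double-check the schema before relying on them): rendering
one of these special pages becomes a cheap indexed read instead of the
original expensive query, something like:

  # Rough illustration; column names may not match the real schema exactly.
  $res = mysql_query( "SELECT qc_namespace, qc_title, qc_value FROM querycache" .
                      " WHERE qc_type = 'Wantedpages'" .
                      " ORDER BY qc_value DESC LIMIT 0, 50" );
  while ( $row = mysql_fetch_object( $res ) ) {
      # each row is one precomputed entry for the special page listing
  }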
The good: quick response and reasonable display of pages like:
http://de.wikipedia.org/wiki/Spezial:Wantedpages
http://es.wikipedia.org/wiki/Especial:Lonelypages
http://meta.wikipedia.org/wiki/Special:Ancientpages
The bad: it takes a long time to rebuild all the caches for all the
wikis. I've been running it for an hour and it's only up to 'fa' (doing
Wikipedia and Wiktionary interleaved). For en.wikipedia the queries are
sufficiently slow that they back up and really bog things down; I ended
up canceling most of the runs on enwiki to keep things running smoothly.
The script that runs the rebuilding is recache-all-specials in
/home/wikipedia/bin (it's just a big mass URL hitter).
So, while reading from the querycache is *great*, we could still use
some improvements on making the original queries more efficient.
Wantedpages and Lonelypages are particularly nasty on superhuge wikis.
-- brion vibber (brion @ pobox.com)
>
>Ariel has crashed a couple times in the last day so it's not
>reliable at
>present. Might be hardware, might be software. Needs testing. Will
>probably use another machine as a slave.
>
We don't have another server with enough space to be used as a slave...
Shaihulud
We had a serious MySQL crash on suda with associated data corruption today
(6/6). There's a summary of the events leading up to the crash at
http://openfacts.berlios.de/index-en.phtml?title=Wikipedia_plans
Whether it was the kill -9 that led to the corruption, or whether the
database was already corrupted and therefore not responding, we do not
know; in any case, there seems to have been no alternative to killing it
(people on #mysql had no ideas either).
Shaihulud made a copy of the CUR tables from all wikis earlier today and
imported them on Ariel. We've switched the live wikis to read from Ariel
in readonly mode; readonly because Ariel doesn't have the OLD tables and
lots of other stuff, because it's not sufficiently tested, and because
we'd like to prevent any data loss if possible.
Tim created a special "readonly" user on ariel for this purpose.
The following have been changed for as long as we are in readonly mode
(a rough LocalSettings.php sketch follows the list):
1) Counters disabled on all wikis
2) Linkscc disabled
3) readonly file set to /home/wikipedia/common/readonly
4) user_newtalk disabled
5) $wgDatabaseServer and $wgDBuser changed to ariel
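In LocalSettings.php terms that looks roughly like this (a sketch, not a
verbatim copy of the live configuration; the exact switches for linkscc
and user_newtalk are omitted because I don't have them in front of me):

  # Readonly-mode configuration, sketched from memory:
  $wgDisableCounters = true;                                # 1) hit counters off
  $wgReadOnlyFile    = '/home/wikipedia/common/readonly';   # 3) readonly marker file
  $wgDatabaseServer  = 'ariel';                              # 5) read from Ariel
  $wgDBuser          = 'readonly';                           # 5) Tim's special readonly user (name assumed)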
There will still be lots of error messages, and because the OLD tables
are not on Ariel, revision histories are missing, etc. This is *only* to
make sure that people can read our articles.
The next step is to fix the data corruption on suda.
Regards,
Erik
U.S. Dollar (Primary): $4,330.40 USD
Canadian Dollar: $195.05 CAD
Euro: €1,646.27 EUR
Pound Sterling: £560.10 GBP
Yen: ¥5,239 JPY
Current Total in U.S. Dollars: $7,501.42 USD
Total as of 28 May 2004: $4,662.08 USD
Difference: $2,839.34 USD
Also, could a developer include a donations link at the end of the 'Database is
read only ...' message? That should increase the donation rate.
-- Daniel Mayer (aka mav)
Heya all. Everything appeared to go well with the web install, but no
LocalSettings.php was created in /config.
So I thought I'd reload the page, figuring I had interrupted it (although
I hadn't noticed the page still loading).
At any rate, this is what I got:
* PHP 4.3.6 ok
* PHP server API is cgi; using ugly URLs (index.php?title=Page_Title)
* Have zlib support; enabling output compression.
* Found GD graphics library built-in, image thumbnailing will be
enabled if you enable uploads.
* Installation directory: /homepages/22/d94944393/htdocs/yeago.net/wiki
* Script URI path: /wiki
* MySQL error 1045: Access denied for user:
'root@infong223.kundenserver.de' (Using password: NO)
* Trying regular user... ok.
* Connected to database... 4.0.17-standard-log; enabling MySQL 4
enhancements
* Database db95048066 exists
* There are already MediaWiki tables in this database. Checking if
updates are needed...
Updating ipblocks table... Query "ALTER TABLE ipblocks ADD ipb_auto tinyint(1) NOT NULL default '0', ADD ipb_id int(8) NOT NULL auto_increment, ADD PRIMARY KEY (ipb_id)" failed with error code "Table 'db95048066.ipblocks' doesn't exist".
So, there may be two problems:
1) LocalSettings.php not being created
2) ipblocks table not being created
Could someone point me in the right direction?
Thanks,
-Steve