Nick Reinking wrote:
> Lee Daniel Crocker wrote:
> > Jason Dreyer wrote:
> > > You can try a different file system or block size. XFS for Linux
> > > is improving. You may want to compare it to ReiserFS. If you are
> > > going to test different block sizes for the db...
> >
> > I'm a big fan of ReiserFS in general. That's what the MySQL folks
> > recommend as well, and I run that at Piclab (which is a small machine
> > but runs the testsuite faster than my Compaq). I'm not sure that block
> > sizes are that flexible for Reiser, but I'll look into it. At any rate,
> > it would be good to find an optimal arrangement for the database
> > before we get the new server to install it on.
>
> AFAIK, ReiserFS block sizes are stuck at 4KB unless someone changed that
> while I wasn't looking.
XFS for Linux 1.2 on x86 supports a maximum of 4K, equal to the page size of
the x86 kernel. XFS supports a minimum block size of 512 bytes, but I doubt
a smaller block size would improve db performance. So a block-size
performance comparison for Wikipedia is probably off in the more distant
future, when larger block sizes are supported on x86 systems or if the db
is moved to IA-64.
Lee Daniel Crocker wrote:
> Absolutely; lies, damned lies, and benchmarks, and all that.
Improving benchmarks which apply to Wikipedia db will hopefully improve the
situation out "in the wild". So yeah, the rest is just lies.
> Disk I/O may well be a major culprit. Memory/CPU usage probably
> isn't. I'll also run some tests for things like having the database
> on a separate machine
> ...
> I'd also appreciate suggestions for other benchmarks (specific
> MySQL settings, for example).
Even if your system has plenty of memory, MySQL may not be configured to use
it. What do your settings in my.cnf look like? These settings will also
differ for MyISAM and InnoDB tables.
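As a rough illustration, a my.cnf fragment might look like the one below. The
sizes here are placeholders for discussion, not recommendations; tune them to
the RAM actually available on the server.

```ini
# Hypothetical /etc/my.cnf fragment -- all sizes are illustrative only.
[mysqld]
# MyISAM caches index blocks in the key buffer.
key_buffer_size = 64M
# Per-connection buffers for sorts and sequential scans.
sort_buffer_size = 2M
read_buffer_size = 1M
# InnoDB caches data and indexes in its own buffer pool instead.
innodb_buffer_pool_size = 128M
innodb_log_file_size = 32M
```

Note that key_buffer_size only helps MyISAM tables; for InnoDB the buffer
pool size is the setting that matters.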
Improving disk throughput usually translates into new hardware. You can try a
different file system or block size. XFS for Linux is improving. You may
want to compare it to ReiserFS. If you are going to test different block
sizes for the db, partition accordingly with the db on a separate partition
from the OS, Apache, PHP and MySQL binaries. This way, you can leave the
binary partitions at a smaller block size and adjust the db partition
without affecting the others. When installing your db on a second machine do
the same; isolate your binaries from your data.
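As a sketch of what that looks like in practice (device names are
placeholders, and the mkfs line is destructive, so it is left commented out),
assuming GNU coreutils and the XFS tools:

```shell
#!/bin/sh
# Report the block size of the filesystem a path lives on
# (non-destructive; uses GNU coreutils stat in filesystem mode).
stat -f --format='%S bytes per fundamental block' /

# Formatting a dedicated db partition with a non-default block size
# (DESTRUCTIVE -- commented out; /dev/sdb1 is a placeholder device):
#   mkfs.xfs -b size=512 /dev/sdb1
```

With the db on its own partition, you can reformat and remeasure without
touching the partitions holding the OS and binaries.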
Monitoring with mytop could be interesting:
http://jeremy.zawodny.com/mysql/mytop/
Lee Daniel Crocker wrote:
> Now that I have the test suite working and installation is quick,
> I set up the software on a freshly-installed machine on my home
> network, ran the suite, reinstalled using InnoDB tables instead of
> MyISAM, ran again, installed MySQL 4.0.12, and ran again.
>
> The semi-bad news: there didn't seem to be any difference in
> performance with any of these changes. The variance in timing
> among setups wasn't much more than the variance from one run to
> the next. The actual numbers are below. Probably the most
> important numbers are the "sec per fetch" and "sec per search"
> at the end--those are the timings of regular page fetches and
> searches done by background threads that run during the
> conformance tests and best simulate actual use.
The differences between MySQL versions and table types may not be the
determining factor in performance here. Inconsequential test results could
indicate a performance bottleneck on your test system. Disk throughput,
available RAM, or some other resource could be limiting all test configurations.
For example:
- If maximum disk throughput on your test system is 18 Mbytes/sec, all
configurations may produce similar results at that level.
- Increase the disk throughput to 33 Mbytes/sec. At that level,
configuration #1 may outperform configuration #2 because it can take
advantage of the added throughput. Configuration #2 may reach its maximum
performance at 28 Mbytes/sec, with little to no improvement at 33
Mbytes/sec, while configuration #1 could keep scaling past 33 Mbytes/sec,
say to 39 Mbytes/sec.
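A crude way to put a number on raw sequential write throughput is a dd run
like the one below (the file path and sizes are arbitrary; conv=fsync forces
the data to disk so the reported rate isn't just cache speed):

```shell
#!/bin/sh
# Write 64 MB of zeros; dd reports the effective rate on stderr.
dd if=/dev/zero of=/tmp/throughput.test bs=1M count=64 conv=fsync
rm -f /tmp/throughput.test
```

Run it against the partition the db actually lives on to see whether the
test box is anywhere near the throughput of the production disks.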
On the other hand, your message indicates that default MySQL
configurations were used. The default configuration options may not be
taking advantage of the resources available on your test system. The next
step could be adjusting these configurations to optimize the use of
available resources.
The fact that Wikipedia can be installed on various configurations with
similar results is good, because it provides a solid baseline for
performance measurement.
BTW, this is my first post to the list and I wanted to note and thank all of
you for the excellent work this project has produced. We are testing the
Wikipedia engine for use as a team knowledgebase. I know there are other
engines that may be more suitable for this, but it was hard to pass up the
combination of features included in Wikipedia.
Thank you.
-- Jason Dreyer
Hello,
I found out (at the German wiki) that there is a 'translation error' in
'special:preferences', but I looked at meta and found the same misleading
sentence.
The option 'Number of titles in recent changes:' (technically modifying
$wpRecent/$rclimit) affects more than 'recent changes'. It affects all (?)
generated lists.
This message is cc'd to wikipedia-l, in case other languages want to think
about a better 'translation' for their wikis, too.
Smurf
--
------------------------- Anthill inside! ---------------------------
Hello,
this patch corrects a typo in LanguageDe.php.
The typo exists in the phpwiki tree as well.
Smurf
316c316
< Bitte melden Si sich an, sobald Sie es erhalten.",
---
> Bitte melden Sie sich an, sobald Sie es erhalten.",
Now that I have the test suite working and installation is quick,
I set up the software on a freshly-installed machine on my home
network, ran the suite, reinstalled using InnoDB tables instead of
MyISAM, ran again, installed MySQL 4.0.12, and ran again.
The semi-bad news: there didn't seem to be any difference in
performance with any of these changes. The variance in timing
among setups wasn't much more than the variance from one run to
the next. The actual numbers are below. Probably the most
important numbers are the "sec per fetch" and "sec per search"
at the end--those are the timings of regular page fetches and
searches done by background threads that run during the
conformance tests and best simulate actual use.
The semi-good news is that MySQL 4.0.12 installed easily, worked out of
the box with no problems, and seems as reliable as its now "production"
status would indicate. It didn't have any performance problems either, so
there would seem to be no downside to using it if we decided to upgrade
to take advantage of its features.
MyISAM:
Test "Links" Succeeded (120.817 secs)
Test "HTML" Succeeded (321.443 secs)
Test "Editing" Succeeded (229.574 secs)
Test "Parsing" Succeeded (23.135 secs)
Test "Special" Succeeded (124.010 secs)
Test "Search" Succeeded (33.702 secs)
Test "Math" Succeeded (49.452 secs)
Stopped background threads.
Fetched 213 pages in 784.356 sec (3.682 sec per fetch).
Performed 201 searches in 397.350 sec (1.865 sec per search).
Total elapsed time: 0 hr, 16 min, 41.367 sec.
InnoDB:
Test "Links" Succeeded (113.099 secs)
Test "HTML" Succeeded (247.384 secs)
Test "Editing" Succeeded (175.459 secs)
Test "Parsing" Succeeded (16.881 secs)
Test "Special" Succeeded (159.286 secs)
Test "Search" Succeeded (45.763 secs)
Test "Math" Succeeded (60.805 secs)
Stopped background threads.
Fetched 194 pages in 721.915 sec (3.721 sec per fetch).
Performed 192 searches in 343.591 sec (1.771 sec per search).
Total elapsed time: 0 hr, 15 min, 20.568 sec.
MySQL 4.0.12:
Test "Links" Succeeded (114.171 secs)
Test "HTML" Succeeded (258.449 secs)
Test "Editing" Succeeded (212.278 secs)
Test "Parsing" Succeeded (21.764 secs)
Test "Special" Succeeded (131.613 secs)
Test "Search" Succeeded (31.383 secs)
Test "Math" Succeeded (52.241 secs)
Stopped background threads.
Fetched 201 pages in 748.631 sec (3.725 sec per fetch).
Performed 200 searches in 350.369 sec (1.743 sec per search).
Total elapsed time: 0 hr, 15 min, 51.312 sec.
--
Lee Daniel Crocker <lee(a)piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
Hello,
I didn't find it on the download page, so I ask here:
Are you dumping using the -e option of mysqldump?
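For reference, an invocation with extended inserts might look like the line
below (database name and user are placeholders). -e / --extended-insert packs
many rows into each INSERT statement, which makes the dump smaller and much
faster to reload; --quick streams rows instead of buffering whole tables in
memory:

```shell
mysqldump -e --quick -u wikiuser -p wikidb > wikidb.sql
```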
Smurf
Heya,
I've added a list of sites using the Wikipedia software to our page at
http://wikipedia.sourceforge.net
Let me know if I missed one.
Regards,
Erik
Hi!
The Wikipedia SF Homepage at http://wikipedia.sourceforge.net/
is missing a link to the Download Page
(http://sourceforge.net/project/showfiles.php?group_id=34373)
Maybe this page should look more like wikipedia?
Just a wee bit off-topic, so sorry :-)
Cheers
Leo
>
> Date: Mon, 14 Apr 2003 18:57:40 -0500
> From: Lee Daniel Crocker <lee(a)piclab.com>
> To: wikitech-l(a)wikipedia.org
> Subject: [Wikitech-l] New phase3 code reorganization
> Reply-To: wikitech-l(a)wikipedia.org
>
>
> The latest phase3 code has been significantly reorganized, and so
> it has been imported into a new CVS module "phase3" instead of
> "phpwiki/newcodebase". I will also make a .zip file release for
> those who don't want to play with CVS.
>
> I think all the docs are updated; if you know of any stray docs
> somewhere, please update them to reflect this change.