On Sat, Feb 01, 2003 at 01:14:00PM +0100, Erik Moeller wrote:
That is correct, and you could be right about this (I don't know what the current database schema looks like exactly, but last time I was involved it was still fine). But keep in mind that a composite index also adds a lot of overhead for updates, takes up more room in the index cache, and doesn't really add much speed if the extra columns don't have high selectivity, i.e., don't filter out many unwanted records.
True. However, I suspect that with some of our queries, our existing single-field index has only very low selectivity, which is what makes stuff like "Most wanted" and "History" so slow.
Adding composite indexes won't help there. What you need is an extra redundant table that contains the number of wanted links.
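Roughly something like this (all names here are invented, just to illustrate the idea):

    -- Redundant counter table, maintained whenever links are added or
    -- removed, so that "Most wanted" becomes a small query over this
    -- table instead of a big GROUP BY over the whole links table.
    CREATE TABLE wantedlinks (
      wl_title VARCHAR(255) NOT NULL,   -- title of the missing page
      wl_count INT UNSIGNED NOT NULL,   -- number of pages linking to it
      PRIMARY KEY (wl_title),
      INDEX (wl_count)
    );

    SELECT wl_title, wl_count FROM wantedlinks
    ORDER BY wl_count DESC LIMIT 50;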
- The HANDLER command. HANDLER, which works with both InnoDB and MyISAM,
provides low-level access to tables *without locking*. You get reads, but you don't get consistency, which may cause problems when you're dealing with critical, heavily modified data. But it seems perfect for our archive stuff.
If there are not many updates then this accomplishes next to nothing, except that you have made the software more complicated.
HANDLER works almost exactly the same as SELECT from a syntax POV.
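For example, an existence lookup would be something along these lines (the index name here is made up, and it needs a MySQL version that has HANDLER at all):

    HANDLER cur OPEN;
    -- reads straight from the storage engine: no table locks taken,
    -- but also no guarantee of a consistent view of the data
    HANDLER cur READ name_title_idx = (0, 'Some_title') LIMIT 1;
    HANDLER cur CLOSE;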
That's only part of the problem. If anyone wants to mess with this they have to understand what it does and why. That doesn't go away if you have similar syntaxes.
- Upgrading to MySQL4.
Actually, I was once in favour of that, but Lee convinced me that as long as the developers of MySQL don't think it is ready for production, we shouldn't either. There is only one thing more important than speed, and that is stability.
They are currently labeling it as a "gamma". I think it might be worth a try; we can always downgrade if it causes problems.
Not if we start using MySQL4 features and the problems only start to appear after a while. I agree that the risk is very small, but still.
- Caching our article index. I have mentioned this idea before, I think:
We could just keep our entire article index in memory *persistently* (currently the linkcache is filled for each request), which should make link lookup almost instant.
What makes you think it isn't already?
I have done tests on the link lookup with a local database and it's very slow with link-heavy pages. This is also what our profiling has shown.
Could you be a bit more specific about that?
I suspect the biggest part of the involved tables is in the cache anyway,
I don't trust automatic caching, especially since our CUR table is huge and it is only a very small subset of it (the titles) that we need in memory.
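If we don't want to rely on the automatic cache, we could even pin exactly that subset in database memory with a HEAP table; a rough sketch, assuming the cur_namespace/cur_title columns (the table name is invented):

    -- In-memory copy of just the title columns. HEAP tables live
    -- entirely in RAM, use hash indexes (fast equality lookups),
    -- and have to be rebuilt after a server restart.
    CREATE TABLE titlecache (
      tc_namespace TINYINT NOT NULL,
      tc_title VARCHAR(255) NOT NULL,
      PRIMARY KEY (tc_namespace, tc_title)
    ) TYPE=HEAP;

    INSERT INTO titlecache SELECT cur_namespace, cur_title FROM cur;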
For an existence query it only needs the index, not the table itself. Did you check if we are doing something silly here so that the database has to access the table anyway? What does EXPLAIN say here?
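I.e., something like the following (column names guessed from the cur schema). If the Extra column of the output says "Using index", the lookup is answered from the index alone; if it doesn't, we are reading the data file for every single link:

    EXPLAIN SELECT cur_namespace, cur_title
    FROM cur
    WHERE cur_namespace = 0 AND cur_title = 'Some_title';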
-- Jan Hidders
.---------------------------------------------------------------------.
| Post-doctoral researcher            e-mail: jan.hidders@ua.ac.be     |
| Dept. Math. & Computer Science      tel: (+32) 3 820 2427            |
| University of Antwerp, room J1.06   fax: (+32) 3 820 2421            |
| (UIA) Universiteitsplein 1, B-2610 Wilrijk - Antwerpen (Belgium)     |
`---------------------------------------------------------------------'