On Dec 10, 2003, at 17:06, Erik Moeller wrote:
Brion-
> Comments? Ideas? Complaints?
> Possible performance drawbacks when dealing with huge MySQL tables?
The revisions table would be about the size of the present old table: somewhat more rows, but without the overhead of duplicated title information. Meanwhile, functions that deal with page titles but don't care about their contents (whatlinkshere, allpages, orphans, etc.) may well be faster with a smaller page table instead of the text-bloated cur.
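A minimal sketch of the split being proposed, using SQLite for illustration (the wiki itself runs MySQL, and the table and column names here are invented, not the actual schema): a slim page table holding only title metadata, and a revision table holding every version's text.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE page (
        page_id    INTEGER PRIMARY KEY,
        title      TEXT NOT NULL,   -- stored once per page, not per revision
        latest_rev INTEGER          -- points into revision
    );
    CREATE TABLE revision (
        rev_id  INTEGER PRIMARY KEY,
        page_id INTEGER NOT NULL REFERENCES page(page_id),
        text    TEXT NOT NULL       -- the bulky part lives only here
    );
""")
db.execute("INSERT INTO page VALUES (1, 'Sandbox', NULL)")
db.execute("INSERT INTO revision VALUES (100, 1, 'first draft')")
db.execute("INSERT INTO revision VALUES (101, 1, 'second draft')")
db.execute("UPDATE page SET latest_rev = 101 WHERE page_id = 1")

# Title-only queries (allpages, orphans, ...) never touch revision text:
titles = [row[0] for row in db.execute("SELECT title FROM page")]
# Fetching the current text is a single join on the latest_rev pointer:
cur = db.execute("""SELECT r.text FROM page p
                    JOIN revision r ON r.rev_id = p.latest_rev
                    WHERE p.title = 'Sandbox'""").fetchone()
print(titles, cur[0])  # ['Sandbox'] second draft
```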
> Limits?
The maximum size for an InnoDB table (as the total InnoDB table space) is apparently 4 billion pages, with a default page size of 16 KB. That's 64 terabytes.
As the wiki is edited more and more, the relative number of current revisions compared to old revisions will continue to shrink; storage-related limits wouldn't be significantly different from the present schema's.
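The arithmetic behind the 64-terabyte figure, taking "4 billion pages" as 2^32 pages of 16 KB each:

```python
# InnoDB tablespace limit: 2**32 pages at the default 16 KB page size.
pages = 2 ** 32
page_size = 16 * 1024          # bytes
total_bytes = pages * page_size
print(total_bytes // 2 ** 40)  # in terabytes -> 64
```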
> More time needed to fix corruptions? (If a corruption occurs in CUR, fixing that table should be enough. If it occurs in the big ass table for all pages, prepare to wait.)
And if there's corruption in the big old table in the old schema? Or the smaller page table in the new schema?
> More difficult backup+table export (for those who want just CUR and not OLD), but that can be dealt with.
It's already inconvenient for anyone who wants regular updates; there's a lot of work that can be done on that front. A format that can be sensibly read and organized, rather than a big SQL statement dump, might be useful.
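One hypothetical shape such a readable dump could take, sketched here with invented tag names (nothing here reflects an actual export format): one element per page with its revisions nested inside, instead of a monolithic SQL dump.

```python
import xml.etree.ElementTree as ET

# Build a toy dump: a page element wrapping its revisions,
# so a consumer can stream page-by-page without parsing SQL.
root = ET.Element("wikidump")
page = ET.SubElement(root, "page", title="Sandbox")
rev = ET.SubElement(page, "revision", id="101")
rev.text = "second draft"

xml = ET.tostring(root, encoding="unicode")
print(xml)
```

A consumer could then iterate over `<page>` elements with any streaming XML parser rather than replaying INSERT statements against a live database.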
-- brion vibber (brion @ pobox.com)