Andrew Whitworth wrote:
On Jan 16, 2008 8:18 PM, Majorly axel9891@googlemail.com wrote:
On 17/01/2008, Matthew Britton matthew.britton@btinternet.com wrote:
Sounds good. Who's getting the 'bigdelete' permission, stewards?
People have suggested bureaucrats, but that is a poor idea.
Don't give the permission to anybody. A better idea is to fix big deletions so that they don't bork the server when invoked, and then remove the 'bigdelete' permission entirely. Flag a page as being "in dispose" while a background process slowly grinds through the deletions. Ideally, this situation won't come up too frequently.
Wouldn't it be possible to change the way deletions work? Currently, when a page is deleted, the following happens:
- The related entry is deleted from the page table (1 row affected)
- All corresponding rows in the revision table are copied (using INSERT SELECT) to the archive table (N rows affected, and an additional N rows fetched)
- The revision table rows are then deleted (N rows affected)

This adds up to a total of 2N+1 rows being inserted or deleted, plus another N rows being selected.
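The steps above can be sketched end to end. This is a minimal illustration in Python with sqlite3, using a deliberately simplified hypothetical schema (the real MediaWiki page/revision/archive tables have many more columns); it only shows the shape of the writes, not the actual implementation:

```python
import sqlite3

# Hypothetical, stripped-down schema loosely modeled on MediaWiki's
# page / revision / archive tables -- NOT the real schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE page (page_id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE revision "
            "(rev_id INTEGER PRIMARY KEY, rev_page INTEGER, text TEXT)")
cur.execute("CREATE TABLE archive (ar_rev_id INTEGER, ar_page INTEGER, text TEXT)")

# One page with N = 3 revisions.
cur.execute("INSERT INTO page VALUES (1, 'Example')")
cur.executemany("INSERT INTO revision VALUES (?, 1, ?)",
                [(i, "rev %d" % i) for i in range(1, 4)])

def delete_page(page_id):
    # 1 row deleted from page.
    cur.execute("DELETE FROM page WHERE page_id = ?", (page_id,))
    # N rows selected and re-inserted into archive (the INSERT SELECT).
    cur.execute("INSERT INTO archive SELECT rev_id, rev_page, text "
                "FROM revision WHERE rev_page = ?", (page_id,))
    # N rows deleted from revision.
    cur.execute("DELETE FROM revision WHERE rev_page = ?", (page_id,))

delete_page(1)
print(cur.execute("SELECT COUNT(*) FROM archive").fetchone()[0])   # 3
print(cur.execute("SELECT COUNT(*) FROM revision").fetchone()[0])  # 0
```

For N revisions this touches 2N+1 rows with writes (1 page delete + N archive inserts + N revision deletes) and reads N more, which is why a page with hundreds of thousands of revisions makes the delete so expensive.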
Two improvements could be made:

- Finally start using the rev_deleted field rather than the archive table. This changes the INSERT SELECT into an UPDATE WHERE on the revision table, which affects only N rows rather than 2N and doesn't require SELECTing any rows.
- Delete the page table entry immediately (making the page and its revisions invisible), and schedule moving/rev_deleting the revisions in the job queue. This would severely reduce the load of a delete request, but would delay the old revisions showing up in the undelete pool (the "undelete N deleted revisions?" link), making it hard or impossible to undelete a page shortly after deleting it. A solution could be to move/rev_delete the most recent revision immediately (i.e. right after deleting the page table entry), so that at least that revision can be undeleted straight away.
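Both improvements together can be sketched as follows. Again this is a toy model in Python with sqlite3 under an assumed simplified schema (including a made-up job table and a 'revDelete' job name), not the actual MediaWiki job queue:

```python
import sqlite3

# Hypothetical, simplified schema -- NOT the real MediaWiki tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE page (page_id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, "
            "rev_page INTEGER, rev_deleted INTEGER DEFAULT 0)")
cur.execute("CREATE TABLE job (job_cmd TEXT, job_page INTEGER)")

# One page with N = 3 revisions.
cur.execute("INSERT INTO page VALUES (1, 'Example')")
cur.executemany("INSERT INTO revision (rev_id, rev_page) VALUES (?, 1)",
                [(i,) for i in range(1, 4)])

def delete_page_fast(page_id):
    # Cheap immediate part: drop the page row, so the page disappears.
    cur.execute("DELETE FROM page WHERE page_id = ?", (page_id,))
    # Flag only the most recent revision right away, so a quick
    # undelete of at least that revision stays possible.
    cur.execute("UPDATE revision SET rev_deleted = 1 WHERE rev_id = "
                "(SELECT MAX(rev_id) FROM revision WHERE rev_page = ?)",
                (page_id,))
    # Defer the bulk work to the job queue.
    cur.execute("INSERT INTO job VALUES ('revDelete', ?)", (page_id,))

def run_jobs():
    # Background worker: one UPDATE WHERE touching N rows,
    # no INSERT SELECT and no archive copy at all.
    for cmd, pid in cur.execute("SELECT * FROM job").fetchall():
        if cmd == "revDelete":
            cur.execute("UPDATE revision SET rev_deleted = 1 "
                        "WHERE rev_page = ?", (pid,))
    cur.execute("DELETE FROM job")

delete_page_fast(1)   # request returns after ~3 cheap row operations
run_jobs()            # the remaining revisions are flagged later
```

The user-facing request now does a constant amount of work regardless of N; the N-row UPDATE happens whenever the job runner gets to it, which is exactly the delayed-undelete trade-off described above.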
Roan Kattouw (Catrope)