On Wed, Jul 23, 2008 at 10:33 AM, Sheldon Rampton <sheldon(a)prwatch.org> wrote:
Can anyone here give me some suggestions for the best way to back up the database for our website? Originally we used a cron job that did a MySQL dump every evening, but as the size of our wiki has grown, the time needed just to run the dump has grown. It now takes 30-45 minutes, during which time site performance bogs down. We've therefore tried doing live MySQL replication instead to maintain a mirror of the database, but that doesn't seem to work very well.

Is there some other approach that we should be trying?
There are lots of potential approaches.
First - what OS is your database server, and what type of hardware is it running on?

Second - what are the problems you are seeing with the live replication?

Third - do you have any sort of system performance trace data for what's going on during the period it's doing the current MySQL dumps? sar, top, vmstat, iostat, etc.
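As a rough sketch, that kind of trace data can be captured alongside the dump with a small loop. The log path, sampling interval, and the short duration here are placeholders, not anything from the original setup; in practice you'd run it for the length of the dump window:

```shell
# Sample system activity while the backup runs.
# /tmp/backup-trace.log and the 6-second duration are illustrative;
# set the duration to cover the whole dump window in real use.
LOG=/tmp/backup-trace.log
END=$(( $(date +%s) + 6 ))
while [ "$(date +%s)" -lt "$END" ]; do
    date >> "$LOG"
    # Guard each tool in case it isn't installed on this host.
    command -v vmstat >/dev/null && vmstat 1 2 >> "$LOG"
    command -v iostat >/dev/null && iostat -x 1 2 >> "$LOG"
    sleep 2
done
```

Comparing a trace from the dump window against one from a quiet period usually makes it obvious whether the bottleneck is disk I/O, CPU, or memory pressure.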
This is an example of a generic performance sizing problem a lot of people miss when architecting systems. A system needs to perform acceptably well during its "degraded" periods: during backups, during a RAID disk failure (RAID-5 parity operation with a disk out, or any RAID level while bringing in a replacement or hot-spare drive), etc. Sizing for that is trickier.
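For what it's worth, if the wiki's tables are InnoDB, the nightly dump itself can often be made much less disruptive with mysqldump's --single-transaction option, which takes a consistent snapshot without holding table locks, plus --quick to stream rows instead of buffering them. A sketch of such a cron entry; the credentials file, database name, and output path are placeholders:

```shell
# crontab entry: nightly dump at 02:30 without table locks (InnoDB only).
# /etc/mysql-backup.cnf, "wikidb", and the output path are hypothetical.
30 2 * * * mysqldump --defaults-extra-file=/etc/mysql-backup.cnf \
    --single-transaction --quick wikidb | gzip > /var/backups/wikidb-$(date +\%F).sql.gz
```

Note the escaped \%F: cron treats an unescaped % in a crontab line as a newline.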
--
-george william herbert
george.herbert(a)gmail.com
Take a look at Jim Hu's recent thread on this list. Two relevant
messages from the thread: