

Backing up your MySQL data

By Mayank Sharma on December 27, 2004 (8:00:00 AM)


If you have ever pulled your hair out in frustration over data loss, no doubt the word 'backup' has special meaning in your life. Databases offer a nice way to catalog data, but with the amount of data being entrusted to MySQL databases these days, the after-effects of an unwise DROP DATABASE command, an unlucky system crash, or a failed hand-edit of the table structure can be catastrophic and unrecoverable -- unless you have a backup to restore from.

MySQL has a built-in command-line utility called mysqldump that works with the MySQL server to create platform-independent text files containing a database's contents.

To back up a database called sample_db, issue this command:

# mysqldump -u <username> -p<password> -h <host> sample_db > /usr/backups/mysql/sample_db.2004-12-16

MySQL uses the username, password, and host to check whether you have access privileges to the database. (Note that there is no space between -p and the password; with a space, mysqldump prompts for the password interactively and treats the word that follows as a database name.) After checking your credentials, MySQL directs the output of the mysqldump command to the file and location specified. It is common practice to 'tag' the output file with a date. In cases where backups are made multiple times a day, the time information is tagged on as well.
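A date tag like the one above can be generated rather than typed by hand. A small sketch using the standard date command (the directory is just the one from the example above):

```shell
# date +%F prints the current date as YYYY-MM-DD;
# use it to build a tagged backup filename.
stamp=$(date +%F)
outfile="/usr/backups/mysql/sample_db.$stamp"
echo "$outfile"
```

For several backups a day, append a time component too, e.g. `$(date +%F.%H%M)`.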

Note: I won't show the -u, -p and -h switches in future examples of the mysqldump command in order to make the commands easier to read and understand. You'll have to include them while testing any of the commands in this article, though, or else MySQL will complain.

If certain tables in a database have been updated more recently than others, mysqldump allows you to dump only those tables:

# mysqldump sample_db articles comments links > /usr/backups/mysql/sample_db.art_comm_lin.2004-12-16

This will back up the articles, comments, and links tables in the database. The output file has been named accordingly. This feature comes in handy when dealing with large databases (e.g. a content management system). There are also certain tables that could be grouped under one section to create smaller, more easily managed backup files. They can be given a section name, as in the following example:

# mysqldump sample_db player score season > /usr/backups/mysql/sample_db.playerinfo.2004-12-16

To save space, you can compress the backup file by piping it through gzip:

# mysqldump sample_db | gzip > /usr/backups/mysql/sample_db.2004-12-16.gz
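The same pipe pattern works in reverse at restore time, so you never need an uncompressed copy on disk. Here's a sketch you can run anywhere, with printf standing in for mysqldump; on a real server you would pipe the output of gunzip -c straight into the mysql client:

```shell
# Stand-in for mysqldump: any text on stdout shows the pipe at work.
printf 'CREATE TABLE demo (id INT);\n' | gzip > /tmp/sample_db.sql.gz

# At restore time, decompress on the fly instead of storing a plain copy,
# e.g. on a live server:  gunzip -c sample_db.2004-12-16.gz | mysql sample_db
gunzip -c /tmp/sample_db.sql.gz
```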

Network dumps

Keeping the backup on the same system as the one which has the original copy is like flashing your secret weapon to the enemy. One rudimentary way of keeping backups on another box on the network is to copy the backup files manually. But this is the digital age, dude -- automate!

For this example, we're going to assume two machines: the main machine, which runs MySQL, and the remote machine, which will store the backups. In the commands below, the remote machine's IP address is written as <backup-ip> and the main machine's as <db-server-ip>.

It is better to create a separate partition (in this example we're calling it 'archive') on the remote machine and mount it. This way even if the operating system is rendered useless on the remote machine, you can install a fresh one without worrying about the backup files.

For backing up on a Linux machine, you must have the Network File System (NFS) set up. Read Understanding NFS and Implementing NFS for help in getting NFS running. For backing up on a Windows machine, you'll need Samba installed and configured; the Samba project's documentation and the Setting up Samba tutorial should help you with this.

Assuming you have NFS set up, open the /etc/exports file in a text editor and add this line:

/archive <db-server-ip>(rw,no_root_squash)

We are telling NFS to share the /archive directory with the system that has the database. The share has read-write permissions, and no_root_squash means the root user connecting from the database machine keeps root privileges on the remote machine. (There must be no space between the host and the parenthesized options, nor inside the option list.) After saving the file, type this command:

# exportfs -a -r

This re-exports all directories as specified in the /etc/exports file. See man exportfs for details.

Then restart the NFS service:

/etc/rc.d/init.d/nfs restart

This sets up our remote machine. On the machine running MySQL, create a backup_share directory under /mnt with this command:

mkdir /mnt/backup_share

And then mount the remote archive folder onto that directory:

mount -o soft <backup-ip>:/archive /mnt/backup_share

This will mount the folder. Now you can create the backups in the mounted remote machine directory. To back up sample_db onto the remote machine:

# mysqldump sample_db > /mnt/backup_share/sample_db.2004-12-16

If you're backing up to a remote Windows machine, create a folder called archive and share it with read and write permissions. Next, create a backup_share directory under /mnt on your database server as in the previous section and mount it:

# mount -t smbfs -o username=<username>,password=<password> //<backup-ip>/archive /mnt/backup_share

Replace <username> and <password> with the information required to access the share. Lastly, create the backup (sample_db) on the mounted share:

# mysqldump sample_db > /mnt/backup_share/sample_db.2004-12-16

Automating the process

Now that we know how to back up databases and tables and how to keep the backups at remote locations, it's time we let Linux handle it for us. We'll use the cron daemon for this. cron is a Linux program that runs background tasks at specified intervals. The cron daemon wakes up once per minute and checks a crontab file to see if it has any tasks, and if so, it executes them.

We'll write a very simple script to take backups and then schedule it using cron. Open a text editor and copy the script below into a new file:

#!/bin/sh

## If you are backing up on Linux, uncomment the line below
# mount -o soft <backup-ip>:/archive /mnt/backup_share

## If you are backing up on Windows, uncomment the line below and fill in the username and password
# mount -t smbfs -o username=<username>,password=<password> //<backup-ip>/archive /mnt/backup_share

## At the very end, $(date +%F) appends the current date to the filename
mysqldump -u <username> -p<password> -h <hostname> sample_db > /mnt/backup_share/sample_db.$(date +%F)

# unmount the remote filesystem
umount /mnt/backup_share

Save the file (any name will do; we'll call it mysql_backup.sh in this example).

This is a quick and untidy script. In a proper deployment you would first check whether the remote partition has been mounted, whether the backup file has successfully traveled to the remote system, and other such things.

Give executable permission to the script:

chmod +x ./mysql_backup.sh

Now comes the cron part. cron gives you four options to run the script: hourly, daily, weekly, and monthly. Depending on which you choose, you'll need to copy the script to its directory in /etc/cron.timeperiod, where timeperiod is the interval at which you want to perform the backup. I'll copy the script under /etc/cron.daily since I want daily backups. When you're done, restart the cron daemon:

/etc/rc.d/init.d/crond restart

And that's it -- your backup will execute at 4:02 a.m. each day, the default time for files under the /etc/cron.daily directory as specified in the /etc/crontab file. See the Wikipedia page on crontab for more information.
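If none of the four stock intervals suits you, a crontab entry lets you pick the exact minute yourself. Assuming the script was saved as /root/mysql_backup.sh (adjust the path to wherever you put yours), an entry like this runs it at 2:30 a.m. every day:

```shell
# minute hour day-of-month month day-of-week  command   (add via crontab -e)
30 2 * * * /root/mysql_backup.sh
```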

Using the dumps

The backup files created by mysqldump are simple text files containing the CREATE TABLE and INSERT statements needed to rebuild the database. Here's a sample of an output file:

-- MySQL dump 8.23
-- Host: localhost Database: geeklog
-- Server version 3.23.58

-- Table structure for table `gl_commentcodes`

CREATE TABLE gl_commentcodes (
  code tinyint(4) NOT NULL default '0',
  name varchar(32) default NULL
) TYPE=MyISAM;

-- Dumping data for table `gl_commentcodes`

INSERT INTO gl_commentcodes VALUES (0,'Comments Enabled');
INSERT INTO gl_commentcodes VALUES (-1,'Comments Disabled');

To restore your database from this file, you'll first need to create an empty database. To populate that database with tables and data, feed the backup file you want to use to the mysql client:

mysql -u <username> -p<password> -h <hostname> sample_db < /mnt/backup_share/sample_db.2004-12-16

Fill in the authentication information above, and presto! There's your database as it was when you last saw it.
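The "create an empty database" step can be scripted from the shell as well; a sketch using the same placeholders as earlier (mysqladmin ships with the MySQL client tools):

```shell
# Create the target database, then replay the dump into it.
mysqladmin -u <username> -p<password> -h <hostname> create sample_db
mysql -u <username> -p<password> -h <hostname> sample_db < /mnt/backup_share/sample_db.2004-12-16
```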


mysqldump is an important tool for a MySQL database administrator, and a solid foundation for maintaining data integrity and availability.

Mayank Sharma is a freelance technology writer and FLOSS migration consultant in New Delhi, India.



Comments on Backing up your MySQL data

Note: Comments are owned by the poster. We are not responsible for their content.

Small bug?

Posted by: Anonymous Coward on December 27, 2004 08:51 PM
In the last code line of the article, I think that the file sample_db.2004-12-16 should be directed into mysql instead of writing the output of it into the backup file. The line as it stands now could delete the file!

The correct line would look like this (the part before /mnt/ is a less-than sign instead of a greater-than):

mysql -u<username> -p<password> -h<hostname> sample_db < /mnt/backup_share/sample_db.2004-12-16


Re:Small bug?

Posted by: Jem Matzan on December 27, 2004 09:21 PM
Oops -- good catch. That one was my fault, not the author's. I had to translate his text into HTML and accidentally did the wrong sign there. It's now fixed.



Re:Small bug?

Posted by: Mayank Sharma on December 27, 2004 09:39 PM
Thanks Jem for the correction (and the translation) :)



Re:Small bug?

Posted by: Anonymous Coward on December 27, 2004 09:28 PM
You're right, that would overwrite the backup file with the output of the mysql command. That needs to be corrected.


why NFS?

Posted by: mweisman on December 28, 2004 12:57 AM
In my experience, NFS is not absolutely necessary to back up MySQL over a network. Without much effort, I back up my remote MySQL database over the Internet with no problem. As long as TCP wrappers and/or your firewall are configured properly, you don't need NFS. If you are doing the backup over an internal network, maybe NFS is better, but I have always been wary of using NFS over the Internet.

Just a thought.



Posted by: Anonymous Coward on December 28, 2004 01:46 AM
why would you use nfs? if, for example, an attacker does manage to get a shell on your computer, having your backups over nfs would leave them wide open. the only authentication nfs performs is on a host basis. scp would be a much better way to go. while it's definitely not perfect, it would be a much better idea to set up an ssh key and use ssh-agent for the key's passphrase. you may have to type in the passphrase after a reboot or something, but it's definitely a better solution than using nfs.



Posted by: Anonymous Coward on December 28, 2004 02:40 AM
Rsync is your friend...



Posted by: Anonymous Coward on December 29, 2004 04:29 AM
Agreed. I have an entry in my crontab that opens the service, does the rsync, and closes it again. (This behaviour is for saving my server resources and paranoid security.)


backing up to FTP?

Posted by: Anonymous Coward on December 28, 2004 06:04 AM
How can one backup to ftp rather than nfs? That would also be useful...


Re:backing up to FTP?

Posted by: Anonymous Coward on December 29, 2004 05:35 AM
Pipe the output of mysqldump (or gzip if you're compressing) to curl. Works beautifully - I use it every day.


Automation? It's been written.

Posted by: Anonymous Coward on December 28, 2004 06:51 AM
This won't handle the networking aspects but it does nice rotated backups via a frontend to mysqldump.

<A HREF="" title=""></a>


Backing up databases with UTF-8 data

Posted by: tim_qgis on December 28, 2004 06:21 PM
A short time ago I had a MySQL database for a Mambo web site I set up for a Chinese friend of mine. She painstakingly entered all her content in Chinese and the site worked fine. I made backups using mysqldump on a nightly basis. When the inevitable happened and I came to restore her database, all of the Chinese characters had turned into gobbledygook :-( How can I get my backup to restore properly?

Also, wrt network backup, it's much simpler to just run the backup as a cron job on the backup host and specify the port/hostname of the database server when doing the database dump. Of course ssh/rsync is better in that you can compress the backup down before passing it over the network....


Re:Backing up databases with UTF-8 data

Posted by: Anonymous Coward on December 29, 2004 04:32 AM
Just a tip: Try with a nicely crafted php script?


What's wrong with mysqlhotcopy?

Posted by: Anonymous Coward on December 28, 2004 08:07 PM
I use "mysqlhotcopy" (requires perl-DBD-MySQL and perl-DBI to be installed) myself - it just takes a snapshot of the database (using locking) into another filestore tree (which could be on NFS if you wanted). To restore, shut down MySQL if it's still running, move the live tree (/var/lib/mysql) out of the way, re-install the MySQL RPMs if you need to (setting the DB root password), move the backup tree into the live tree and start MySQL again - not rocket science<nobr> <wbr></nobr>:-)

Here's my cron line:

0 0 * * * /usr/bin/mysqlhotcopy --regexp='.*' --allowold /backup/mysql >/dev/null 2>&1


Re:What's wrong with mysqlhotcopy?

Posted by: Anonymous Coward on December 28, 2004 11:11 PM
It is useless if you have InnoDB tables in your MySQL DB.


Re:What's wrong with mysqlhotcopy?

Posted by: Anonymous Coward on December 28, 2004 11:57 PM
One problem: mysqlhotcopy only supports ISAM and MyISAM tables. Won't work for InnoDB and such.


using SSH for network backups

Posted by: Anonymous Coward on December 28, 2004 09:53 PM
You can easily use SSH to do your network backups. I think rsync is a little bit of overkill for this.
If you create a restricted account on the remote machine, you may even consider to use a passwordless private key to log in. Then you could use cron to automate the backup.
I've also created a special user just for those backups (in mysql, of course). This allows you to have one user with SELECT rights on all db's to do a full backup.
Backup would be done like this:

$ mysqldump --all-databases -u -p | ssh @ 'cat > backupfile'

Works like a charm!


Re:using SSH for network backups

Posted by: Anonymous Coward on December 28, 2004 09:55 PM
aliens ate the username after -u and the passworddummy after -p and user and remote machine in the SSH command. Sorry!


Re:using SSH for network backups

Posted by: dazk on January 02, 2005 12:53 AM
This works the other way round as well. You can start the ssh connect on a remote machine and execute the mysqlbackup stuff. The output still goes through the ssh link and can be piped into a file on the remote machine.

Since the MySQL Dumps are plaintext, you might want to use -C to reduce traffic.


Additional backup

Posted by: walt-sjc on December 29, 2004 03:57 AM
One issue is that you still lose anything newer than the last backup. The binary log is helpful for recovering the deltas if you lose the database for whatever reason. I roll (back up) my binary logs too, so I can use the prior day's backups and recover to any point in time - such as recovering from bozo mistakes (Oops, didn't mean to alter the table in the production DB....:-) The binary log doesn't help if your entire machine goes south...

I also keep my backups for several months as I have been in the situation where something happened to data and nobody noticed for weeks.

You can also setup a slave DB server on your backup box that has up-to-the minute changes, and you can fail over to it if needed.


just one question

Posted by: Anonymous Coward on December 29, 2004 09:04 PM
why not run mysqldump on the remote machine instead of exporting it via nfs?


Backing up your MySQL data... remotely.

Posted by: Anonymous Coward on January 06, 2005 09:19 AM
After reading the first few paragraphs of this article, I was laughing. NFS and/or SMB *REQUIRED* in order to do backups over the network!? HAHAHA.

It bothered me so much, I felt the need to improve the article myself. See Backing up your MySQL data.


