Catalog Maintenance

Without proper setup and maintenance, your Catalog may continue to grow indefinitely as you run Jobs and back up Files. How fast your Catalog grows depends on the number of Jobs you run and how many files they back up. By deleting records within the database, you can make space available for the new records that will be added during the next Job. By regularly deleting old expired records (those older than the Retention period), your database size will remain roughly constant.

If you started with the default configuration files, they already contain reasonable defaults for a small number of machines (fewer than 5), so if you fall into that case, catalog maintenance will not be urgent as long as you have a few hundred megabytes of free disk space. Whatever the case may be, some knowledge of retention periods will be useful.
The File Retention and the Job Retention periods are specified in each Client resource, as shown below. The Volume Retention period is specified in the Pool resource, and the details are given in the next chapter of this manual.
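For illustration, here is a minimal sketch of a Client resource with its retention directives. The client name, address, password, and retention values below are placeholders, not values taken from your installation; adapt them to your own configuration:

   Client {
     Name = rufus-fd
     Address = rufus                  # assumed host name
     FDPort = 9102
     Catalog = MyCatalog
     Password = "client-password"     # assumed password
     File Retention = 60 days         # File records older than this may be pruned
     Job Retention = 6 months         # Job records older than this may be pruned
     AutoPrune = yes                  # apply these retention periods automatically
   }

With AutoPrune enabled, Bacula applies the retention periods at the end of each Job for that Client; pruning File records is what shrinks the Catalog the most, since they make up the bulk of the database.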
Compacting Your MySQL Database

Over time, as noted above, your database will tend to grow. I've noticed that even though Bacula regularly prunes files, MySQL does not effectively reuse the space, and instead continues growing. To avoid this, from time to time you must compact your database. Normally, large commercial databases such as Oracle have commands that will compact a database to reclaim wasted file space. MySQL has the OPTIMIZE TABLE command that you can use, and SQLite version 2.8.4 and greater has the VACUUM command. We leave it to you to explore the utility of the OPTIMIZE TABLE command in MySQL.

All database programs have some means of writing the database out in ASCII format and then reloading it. Doing so will re-create the database from scratch, producing a compacted result, so below we show you how you can do this for both MySQL and SQLite.

For a MySQL database, you could write the Bacula database as an ASCII file (bacula.sql) and then reload it by doing the following:

   mysqldump -f --opt bacula > bacula.sql
   mysql bacula < bacula.sql
   rm -f bacula.sql

There is no need to explicitly delete the old database, as MySQL will automatically do so in recreating it. Depending on the size of your database, this will take more or less time and a fair amount of disk space. For example, if I cd to the location of the MySQL Bacula database (typically /opt/mysql/var or something similar) and enter:

   du bacula

I get 620,644, which means there are that many blocks containing 1024 bytes each, or approximately 635 MB of data. After doing the mysqldump, I had a bacula.sql file of 174,356 blocks, and after doing the mysql command to recreate the database, I ended up with a total of 210,464 blocks rather than the original 620,644. In other words, the compacted version of the database took approximately one third of the space of the database that had been in use for about a year. As a consequence, I suggest you monitor the size of your database and from time to time (once every 6 months or year) compact it.

Compacting Your SQLite Database

First please read the previous section, which explains why it is necessary to compact a database. SQLite version 2.8.4 and greater has the VACUUM command for compacting the database:

   cd working-directory
   echo 'vacuum' | sqlite bacula.db

As an alternative, you can use the following commands, adapted to your system:

   cd working-directory
   echo '.dump' | sqlite bacula.db > bacula.sql
   rm -f bacula.db
   sqlite bacula.db < bacula.sql
   rm -f bacula.sql

where working-directory is the directory that you specified in the Director's configuration file. Note that in the case of SQLite, it is necessary to completely delete (rm) the old database before creating a new compacted version.

Backing Up Your Bacula Database

If the machine on which your Bacula database resides ever crashes and you need to restore from backup tapes, one of your first priorities will probably be to recover the database. Although Bacula will happily back up your catalog database if it is specified in the FileSet, this is not a very good way to do it, because the database will be saved while Bacula is modifying it. Thus the database may be in an unstable state. Worse yet, you will back up the database before all the Bacula updates have been applied.

To resolve these problems, you need to back up the database after all the backup jobs have been run. In addition, you will want to make the copy while Bacula is not modifying it. To do so, you can use two scripts provided in the release: make_catalog_backup and delete_catalog_backup.
These files will be automatically generated along with all the other Bacula scripts. The first script makes an ASCII copy of your Bacula database into bacula.sql in the working directory you specified in your configuration, and the second deletes the bacula.sql file. The basic sequence of events to make this work correctly is to run all your nightly backups first, then run a Catalog backup Job scheduled after them, using RunBeforeJob to create the ASCII dump and RunAfterJob to remove it, as in the following example:
   # Backup the catalog database (after the nightly save)
   Job {
     Name = "BackupCatalog"
     Type = Backup
     Client = rufus-fd
     FileSet = "Catalog"
     Schedule = "WeeklyCycleAfterBackup"
     Storage = DLTDrive
     Messages = Standard
     Pool = Default
     RunBeforeJob = "/home/kern/bacula/bin/make_catalog_backup"
     RunAfterJob = "/home/kern/bacula/bin/delete_catalog_backup"
   }

   # This schedule does the catalog. It starts after the WeeklyCycle
   Schedule {
     Name = "WeeklyCycleAfterBackup"
     Run = Full sun-sat at 1:10
   }

   # This is the backup of the catalog
   FileSet {
     Name = "Catalog"
     Include = signature=MD5 {
       @working_directory@/bacula.sql
     }
   }

Backing Up Third Party Databases

If you are running a database in production mode on your machine, Bacula will happily back up the files, but if the database is in use while Bacula is reading it, you may back it up in an unstable state.
The best solution is to shut down your database before backing it up, or to use some tool specific to your database to make a valid live copy, perhaps by dumping the database in ASCII format. I am not a database expert, so I cannot provide you advice on how to do this, but if you are unsure about how to back up your database, you might try visiting the Backup Central site, which has been renamed Storage Mountain (www.backupcentral.com). In particular, their Free Backup and Recovery Software page has links to scripts that show you how to shut down and back up most major databases.
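As an illustration only, the same RunBeforeJob/RunAfterJob technique shown above for the catalog can be applied to a third-party database. The sketch below assumes a PostgreSQL database named accounting, a dump script you would write yourself at /usr/local/bin/dump_accounting, and file names chosen purely for this example; every name and path is an assumption to be adapted to your system:

   # Hypothetical Job that dumps a third-party database before backing it up
   Job {
     Name = "BackupAccountingDB"
     Type = Backup
     Client = rufus-fd
     FileSet = "AccountingDump"
     Schedule = "WeeklyCycleAfterBackup"   # reuse the after-backup schedule, or define your own
     Storage = DLTDrive
     Messages = Standard
     Pool = Default
     # Your own script, e.g. containing: pg_dump accounting > /var/tmp/accounting.sql
     RunBeforeJob = "/usr/local/bin/dump_accounting"
     RunAfterJob = "/bin/rm -f /var/tmp/accounting.sql"
   }

   FileSet {
     Name = "AccountingDump"
     Include = signature=MD5 {
       /var/tmp/accounting.sql
     }
   }

The key point is the same as for the catalog: the database is dumped to an ASCII file while it is quiescent (or via the database's own consistent-dump tool), and Bacula then backs up that file rather than the live database files.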
Database Size

For example, suppose you do a backup of two systems, each with 100,000 files. Suppose further that you do a Full backup weekly and an Incremental every day, and that the Incremental backup typically saves 10,000 files. The size of your database after a month can roughly be calculated as:

   Size = 154 * No. Systems * (100,000 * 4 + 10,000 * 26)

where we have assumed 4 weeks in a month and 26 incremental backups per month. This would give the following:

   Size = 154 * 2 * (100,000 * 4 + 10,000 * 26)
or
   Size = 308 * (400,000 + 260,000)
or
   Size = 203,280,000 bytes

So for the above two systems, we should expect to have a database size of approximately 200 Megabytes. Of course, this will vary according to how many files are actually backed up.

Below are some statistics for a MySQL database containing Job records for five Clients from September 2001 through May 2002 (8.5 months) and File records for the last 80 days. (Older File records have been pruned.) For these systems, only the user files and system files that change are backed up. The core part of the system is assumed to be easily reloaded from the RedHat rpms. In the list below, the files (corresponding to Bacula Tables) with the extension .MYD contain the data records, whereas files with the extension .MYI contain indexes.

You will note that the File records (containing the file attributes) make up the large bulk of the number of records as well as the space used (459 Megabytes including the indexes). As a consequence, the most important Retention period will be the File Retention period. A quick calculation shows that for each File that is saved, the database grows by approximately 150 bytes.

   Size in Bytes    Records    File
   =============    =========  ============
             168            5  Client.MYD
           3,072               Client.MYI
     344,394,684    3,080,191  File.MYD
     115,280,896               File.MYI
       2,590,316      106,902  Filename.MYD
       3,026,944               Filename.MYI
             184            4  FileSet.MYD
           2,048               FileSet.MYI
          49,062        1,326  JobMedia.MYD
          30,720               JobMedia.MYI
         141,752        1,378  Job.MYD
          13,312               Job.MYI
           1,004           11  Media.MYD
           3,072               Media.MYI
       1,299,512       22,233  Path.MYD
         581,632               Path.MYI
              36            1  Pool.MYD
           3,072               Pool.MYI
               5            1  Version.MYD
           1,024               Version.MYI

This database has a total size of approximately 450 Megabytes. If we were using SQLite, determining the total database size would be much easier since it is a single file, but we would have less insight into the size of the individual tables than we have in this case.
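If you want to gather similar statistics for your own installation, one possible approach (a sketch, assuming your catalog database is named bacula and your MySQL account has access to it) is to ask MySQL directly for per-table row counts and data/index sizes:

   mysql -e "SHOW TABLE STATUS FROM bacula;"

The Rows, Data_length, and Index_length columns of the output correspond roughly to the record counts and .MYD/.MYI sizes listed above.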