
DB2 9.7 LUW Full Online Backups

Experiments and considerations (July 2012)

Author: José Raúl Barón Rodríguez, DB2 9 LUW DBA, CALCULO S.A. (Spain)


Table of contents

Online Backups
    Preliminary concepts
Recovering dropped tables
To take into account
Summary
APPENDIX A - bombardea.rexx source code


Online Backups

Preliminary concepts

Online backups are database backups that can be taken while the database keeps doing its job. In order to make online backups, the database must be configured for linear logging. As with circular logging, the log buffer records (i.e. log records still kept in memory) are written to the primary log files, BUT unlike circular logging, under linear logging the primary log files are NOT reusable: once filled, they will never be used again unless a restore / rollforward needs to read them.

Log files containing records associated with transactions not yet committed nor rolled back (aka in-flight, unfinished transactions), or log files not yet closed as complete, are called ACTIVE LOG FILES and reside in the active log directory / device. Once complete they become ARCHIVE LOG FILES and can be moved automatically to a dedicated directory assigned to db cfg parameter LOGARCHMETH1 or LOGARCHMETH2 (at which point they are OFFLINE rather than ONLINE archive log files).

One of the first things to do when preparing a database for online backups (besides creating it) is to configure the size and number of primary and secondary log files, as well as the path for archived log files. For the experiments contained in this document we will use the sample database that DB2 creates through the db2sampl command.

db2start
db2sampl

  Creating database "SAMPLE"...
  Connecting to database "SAMPLE"...
  Creating tables and data in schema "DB2INST1"...

'db2sampl' processing complete.

Now we will modify the db cfg log related parameters to our convenience:

db2 get db cfg | grep -i log

 Log retain for recovery status                       = NO
 User exit for logging status                         = NO
 Catalog cache size (4KB)           (CATALOGCACHE_SZ) = (MAXAPPLS*5)
 Log buffer size (4KB)                     (LOGBUFSZ) = 256
 Log file size (4KB)                      (LOGFILSIZ) = 1000
 Number of primary log files             (LOGPRIMARY) = 3
 Number of secondary log files            (LOGSECOND) = 2
 Changed path to log files               (NEWLOGPATH) =
 Path to log files                                    = /home/db2inst1/db2inst1/NODE0000/SQL00001/SQLOGDIR/
 Overflow log path                  (OVERFLOWLOGPATH) =
 Mirror log path                      (MIRRORLOGPATH) =
 First active log file                                =
 . . .

mkdir /experiment/backups/archived_logs

db2 update db cfg using logfilsiz 5000 logprimary 50 logsecond 10 logarchmeth1 disk:/experiment/backups/archived_logs trackmod on
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
SQL1363W  One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, the database must be shutdown and reactivated before the configuration parameter changes become effective.
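A quick sanity check on the values we just set: the maximum active log space they imply is LOGFILSIZ (expressed in 4 KB pages) times (LOGPRIMARY + LOGSECOND). A minimal shell sketch:

```shell
# Maximum active log space implied by the settings we just applied.
# LOGFILSIZ is expressed in 4 KB (4096-byte) pages.
logfilsiz=5000
logprimary=50
logsecond=10
bytes=$((logfilsiz * 4096 * (logprimary + logsecond)))
echo "max active log space: $bytes bytes (~$((bytes / 1024 / 1024)) MB)"
```

With these values the active log path must be able to hold roughly 1.2 GB, which is worth checking before restarting the instance.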


The db cfg TRACKMOD parameter specifies whether or not DB2 should keep track of modifications to the database, so that incremental or delta backups can detect which data pages have changed and must be included in the backup. This parameter should be activated.
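With TRACKMOD activated, such backups can later be taken with the INCREMENTAL clause. As a sketch (commands we did not run in this experiment; INCREMENTAL is cumulative since the last full backup, INCREMENTAL DELTA only covers changes since the last backup of any kind):

```
# Cumulative incremental: pages changed since the last full backup
db2 backup db sample online incremental to /experiment/backups
# Delta: pages changed since the last backup of any kind
db2 backup db sample online incremental delta to /experiment/backups
```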

db2 connect reset
db2stop
db2start

The first connection attempt will fail because the new configuration needs a backup as its starting point. For this experiment a backup to /dev/null will suffice.

db2 connect to sample

SQL1116N A connection to or activation of database "SAMPLE" cannot be made because of BACKUP PENDING. SQLSTATE=57019

db2 backup db sample to /dev/null

Backup successful. The timestamp for this backup image is : 20120718112431

db2 connect to sample

Database Connection Information

 Database server        = DB2/LINUX 9.7.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

We can now start changing data and, thus, generating log files. To keep easier track of our transactions we will create the following control table:

create table dropme (
  id      bigint generated always as identity (start with 1 increment by 1 no maxvalue),
  fecha   timestamp generated always for each row on update as row change timestamp not null,
  literal varchar(50),
  numero  integer default 0
);

We have written a REXX program, bombardea.rexx (its source code can be found in APPENDIX A), with which we 'bomb' our control table with intensive inserts, updates and deletes. After some minutes the following log files can be seen:

alias l='ls -la'
l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000/
total 24080
drwxr-x--- 2 db2inst1 db2iadm1    4096 Jul 13 07:21 .
drwxr-x--- 3 db2inst1 db2iadm1    4096 Jul 12 09:02 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG


Without stopping our 'bombing' scripts (several of them even running simultaneously from different PuTTY sessions or as background processes), we perform an online backup:

time db2 BACKUP DATABASE SAMPLE ONLINE TO "/experiment/backups" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 1 COMPRESS WITHOUT PROMPTING

Backup successful. The timestamp for this backup image is : 20120713080122

real 0m13.582s
user 0m0.015s
sys  0m0.039s

l /experiment/backups
total 32836
drwxr-xr-x 3 db2inst1 db2iadm1     4096 Jul 13 08:01 .
drwxr-x--- 4 db2inst1 db2iadm1     4096 May  9 15:34 ..
drwxr-xr-x 3 db2inst1 db2iadm1     4096 Jul 12 09:02 archived_logs
-rw------- 1 db2inst1 db2iadm1 33574912 Jul 13 08:01 SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713080122.001

l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000/
total 27340
drwxr-x--- 2 db2inst1 db2iadm1    4096 Jul 13 08:01 .
drwxr-x--- 3 db2inst1 db2iadm1    4096 Jul 12 09:02 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw-r----- 1 db2inst1 db2iadm1 3334144 Jul 13 08:01 S0000006.LOG

date
Fri Jul 13 08:02:14 CEST 2012

The database keeps on working. The scripts that were changing the data didn't stop their activity; on the contrary, they carried on without ever suspecting a backup was being performed at the same time. This means:

– At backup time, all the changes done so far were externalized to a new log file (i.e. S0000006.LOG, which is why it is somewhat smaller).

– Changes taking place on the database right after the backup starts are registered in the next log file while the backup itself is progressing.

And all this happens in a non-disruptive fashion.


If we repeat the backup command but this time using the INCLUDE LOGS clause, this is the result:

time db2 BACKUP DATABASE SAMPLE ONLINE TO "/experiment/backups" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 1 COMPRESS INCLUDE LOGS WITHOUT PROMPTING

Backup successful. The timestamp for this backup image is : 20120713083940

real 0m10.952s
user 0m0.018s
sys  0m0.031s

l /experiment/backups/
total 65660
drwxr-xr-x 3 db2inst1 db2iadm1     4096 Jul 13 08:39 .
drwxr-x--- 4 db2inst1 db2iadm1     4096 May  9 15:34 ..
drwxr-xr-x 3 db2inst1 db2iadm1     4096 Jul 12 09:02 archived_logs
-rw------- 1 db2inst1 db2iadm1 33574912 Jul 13 08:01 SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713080122.001
-rw------- 1 db2inst1 db2iadm1 33574912 Jul 13 08:39 SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713083940.001
[db2inst1@raul pjbr]$

l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000/
total 27672
drwxr-x--- 2 db2inst1 db2iadm1    4096 Jul 13 08:39 .
drwxr-x--- 3 db2inst1 db2iadm1    4096 Jul 12 09:02 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw-r----- 1 db2inst1 db2iadm1 3334144 Jul 13 08:01 S0000006.LOG
-rw-r----- 1 db2inst1 db2iadm1  335872 Jul 13 08:39 S0000007.LOG

We just generated a new full backup image, but since less time has elapsed and fewer changes have occurred, the truncated log file (S0000007.LOG) is even smaller than S0000006.LOG. Thanks to INCLUDE LOGS, log files S0000006.LOG and S0000007.LOG are included inside the backup image.
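As an aside, the backup image names in these listings follow a fixed dotted pattern: alias, backup type, instance, node, catalog node, timestamp and sequence number. The timestamp needed for a later RESTORE ... TAKEN AT can therefore be pulled out with plain shell; a small sketch:

```shell
# Split a DB2 backup image file name into its dotted fields.
name="SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713083940.001"
oldIFS=$IFS; IFS=.
set -- $name                       # word-split on the dots (unquoted on purpose)
IFS=$oldIFS
db_alias=$1; img_type=$2; instance=$3; node=$4; catnode=$5; img_ts=$6; img_seq=$7
echo "database=$db_alias timestamp=$img_ts sequence=$img_seq"
```

The $img_ts field is exactly what the TAKEN AT clause expects.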

To generate more log files faster and more easily, we have created a del text file (datos.del) containing data like this:

,,hola,23
,,hola,23
,,hola,23
,,hola,23
... (50,000 identical lines)

and we import this data into the dropme table with:

db2 import from datos.del of del insert into dropme

which generates some 2-3 complete log files on each import operation.
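The datos.del file described above can be generated in one line from the shell; a minimal sketch (same file name and row contents as in the experiment):

```shell
# 50,000 identical ',,hola,23' rows for the db2 import
yes ',,hola,23' | head -n 50000 > datos.del
wc -l < datos.del
```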


If we repeat the backup now we will see:

time db2 BACKUP DATABASE SAMPLE ONLINE TO "/experiment/backups" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 1 COMPRESS INCLUDE LOGS WITHOUT PROMPTING

Backup successful. The timestamp for this backup image is : 20120713094936

real 0m13.260s
user 0m0.020s
sys  0m0.031s

l /experiment/backups/
total 98484
drwxr-xr-x 3 db2inst1 db2iadm1     4096 Jul 13 09:49 .
drwxr-x--- 4 db2inst1 db2iadm1     4096 May  9 15:34 ..
drwxr-xr-x 3 db2inst1 db2iadm1     4096 Jul 12 09:02 archived_logs
-rw------- 1 db2inst1 db2iadm1 33574912 Jul 13 08:01 SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713080122.001
-rw------- 1 db2inst1 db2iadm1 33574912 Jul 13 08:39 SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713083940.001
-rw------- 1 db2inst1 db2iadm1 33574912 Jul 13 09:49 SAMPLE.0.db2inst1.NODE0000.CATN0000.20120713094936.001

l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000
total 52012
drwxr-x--- 2 db2inst1 db2iadm1    4096 Jul 13 09:49 .
drwxr-x--- 3 db2inst1 db2iadm1    4096 Jul 12 09:02 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw-r----- 1 db2inst1 db2iadm1 3334144 Jul 13 08:01 S0000006.LOG
-rw-r----- 1 db2inst1 db2iadm1  335872 Jul 13 08:39 S0000007.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000008.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000009.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000010.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000011.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000012.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000013.LOG
-rw-r----- 1 db2inst1 db2iadm1  270336 Jul 13 09:49 S0000014.LOG

(We have done a couple of data imports while the bombing scripts kept doing their job, uninterrupted and simultaneous all the time; that is why there are log files from 9:31 and 9:47.)

The next thing to do: let's restore the database to the state it was in at, for example, 9:48. To achieve this we have:

– One full backup taken at 8:01 without logs included.
– One full backup taken at 8:39 WITH logs included. (We will use this one.)
– One full backup taken at 9:49 WITH logs included.
– The log files.

And this is what we'll do:

– Copy the log files to a different location to simulate a restore from a tape device.

– Restore the full backup taken at 8:39.

– Roll the restored database forward to 9:48.


mkdir /experiment/backups/pepe
mkdir /experiment/backups/aux
cd /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000
tar -cvzf logs.tar.gz *
S0000000.LOG
S0000001.LOG
S0000002.LOG
S0000003.LOG
S0000004.LOG
S0000005.LOG
S0000006.LOG
S0000007.LOG
S0000008.LOG
S0000009.LOG
S0000010.LOG
S0000011.LOG
S0000012.LOG
S0000013.LOG
S0000014.LOG

mv logs.tar.gz /experiment/backups/pepe/
cd /experiment/backups/pepe/
tar -xvzf logs.tar.gz
S0000000.LOG
S0000001.LOG
S0000002.LOG
S0000003.LOG
S0000004.LOG
S0000005.LOG
S0000006.LOG
S0000007.LOG
S0000008.LOG
S0000009.LOG
S0000010.LOG
S0000011.LOG
S0000012.LOG
S0000013.LOG
S0000014.LOG
rm logs.tar.gz
l
total 52012
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 13 10:12 .
drwxr-xr-x 4 db2inst1 db2iadm1    4096 Jul 13 10:09 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw-r----- 1 db2inst1 db2iadm1 3334144 Jul 13 08:01 S0000006.LOG
-rw-r----- 1 db2inst1 db2iadm1  335872 Jul 13 08:39 S0000007.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000008.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000009.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000010.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000011.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000012.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000013.LOG
-rw-r----- 1 db2inst1 db2iadm1  270336 Jul 13 09:49 S0000014.LOG


We then restore the database using the full online backup from /experiment/backups. We can do this restore either with or without the logtarget clause, since that option simply extracts the log files included in the full backup (via the INCLUDE LOGS option) to the directory we specify, and those files, numbers 6 and 7 respectively, are already in the /experiment/backups/pepe directory.

db2 restore db sample from /experiment/backups taken at 20120713083940 logtarget "/experiment/backups/aux"
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

l /experiment/backups/aux
total 3600
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 13 12:19 .
drwxr-xr-x 5 db2inst1 db2iadm1    4096 Jul 13 12:00 ..
-rw------- 1 db2inst1 db2iadm1 3334144 Jul 13 12:19 S0000006.LOG
-rw------- 1 db2inst1 db2iadm1  335872 Jul 13 12:19 S0000007.LOG

Now we roll forward to the desired point in time, indicating to DB2 where it must go and search for the needed log files. Since files S0000006.LOG and S0000007.LOG are both in /experiment/backups/aux as well as in /experiment/backups/pepe, we only need to indicate the latter, since it contains all the log files necessary to move up to 9:48.

db2 "rollforward db sample to 2012-07-13-09.48.00 using local time and stop overflow log path (/experiment/backups/pepe)"

Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000014.LOG
 Last committed transaction             = 2012-07-13-09.48.00.000000 Local

DB20000I  The ROLLFORWARD command completed successfully.
[db2inst1@raul aux]$

db2 connect to sample

Database Connection Information

 Database server        = DB2/LINUX 9.7.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

db2 select 'max(fecha)' from dropme

1
--------------------------
2012-07-13-09.48.00.729814

1 record(s) selected.

Now, what would have happened if S0000006.LOG and S0000007.LOG had been missing? (That is, the log files containing in-flight transactions while the backup was being taken.)

Let's suppose these log files actually do not exist any more:

cd pepe
[db2inst1@raul pepe]$ l
total 52012
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 13 10:12 .
drwxr-xr-x 5 db2inst1 db2iadm1    4096 Jul 13 12:00 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw-r----- 1 db2inst1 db2iadm1 3334144 Jul 13 08:01 S0000006.LOG
-rw-r----- 1 db2inst1 db2iadm1  335872 Jul 13 08:39 S0000007.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000008.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000009.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000010.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000011.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000012.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000013.LOG
-rw-r----- 1 db2inst1 db2iadm1  270336 Jul 13 09:49 S0000014.LOG

[db2inst1@raul pepe]$ rm S0000006.LOG S0000007.LOG
[db2inst1@raul pepe]$ l
total 48420
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 13 12:33 .
drwxr-xr-x 5 db2inst1 db2iadm1    4096 Jul 13 12:00 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000008.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000009.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000010.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000011.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000012.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000013.LOG
-rw-r----- 1 db2inst1 db2iadm1  270336 Jul 13 09:49 S0000014.LOG

Now we retry the database restore:

db2 restore db sample from /experiment/backups taken at 20120713083940
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

As expected, we cannot connect to the database yet since it is in ROLLFORWARD PENDING state:

[db2inst1@raul pepe]$ db2 connect to sample
SQL1117N  A connection to or activation of database "SAMPLE" cannot be made because of ROLL-FORWARD PENDING.  SQLSTATE=57019
[db2inst1@raul pepe]$ db2 "rollforward db sample to 2012-07-13-09.48.00 using local time and stop overflow log path (/experiment/backups/pepe)"


Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000014.LOG
 Last committed transaction             = 2012-07-13-09.48.00.000000 Local

DB20000I The ROLLFORWARD command completed successfully.

[db2inst1@raul aux]$ db2 connect to sample

Database Connection Information

 Database server        = DB2/LINUX 9.7.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

[db2inst1@raul aux]$ db2 select 'max(fecha)' from dropme

1
--------------------------
2012-07-13-09.48.00.729814

1 record(s) selected.

One remarkable thing is that files S0000006.LOG and S0000007.LOG no longer exist anywhere, since we had also erased them from /experiment/backups/aux:

[db2inst1@raul aux]$ l
total 8
drwxr-xr-x 2 db2inst1 db2iadm1 4096 Jul 13 12:21 .
drwxr-xr-x 5 db2inst1 db2iadm1 4096 Jul 13 12:00 ..
[db2inst1@raul aux]$

With that said, we might suppose DB2 is looking for them in the path given by db cfg parameter LOGARCHMETH1, which points to the archived log files. Let's prove it by deleting log files 6 and 7 from that path:

db2 get db cfg | grep -i logarchmeth1
 First log archive method (LOGARCHMETH1) = DISK:/experiment/backups/archived_logs/

cd /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000
rm S0000006.LOG S0000007.LOG

cd /experiment
[db2inst1@raul experiment]$ find | grep -i S0000006.LOG
[db2inst1@raul experiment]$ find | grep -i S0000007.LOG
[db2inst1@raul experiment]$ find | grep -i S0000008.LOG
./backups/pepe/S0000008.LOG
./backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000/S0000008.LOG
[db2inst1@raul experiment]$

As we can see, files S0000006.LOG and S0000007.LOG do not exist anywhere under /experiment, while S0000008.LOG, for example, does. I am doing this to prove the search method is valid and that those two files simply don't exist in any of the searched paths.


We will now retry the restore to see what happens:

db2 restore db sample from /experiment/backups taken at 20120713083940
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

db2 "rollforward db sample to 2012-07-13-09.48.00 using local time and stop overflow log path (/experiment/backups/pepe)"
SQL4970N  Roll-forward recovery on database "SAMPLE" cannot reach the specified stop point (end-of-log or point-in-time) on database partition(s) "0".  Roll-forward recovery processing has halted on log file "S0000006.LOG".
[db2inst1@raul experiment]$

So we finally achieved what we wanted to prove: the rollforward process fails because the database cannot find the required .LOG files. Now, how shall we fix it? By re-generating files S0000006.LOG and S0000007.LOG:

db2 restore db sample from /experiment/backups taken at 20120713083940 logtarget /experiment/backups/aux
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

l /experiment/backups/aux
total 3600
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 13 12:52 .
drwxr-xr-x 5 db2inst1 db2iadm1    4096 Jul 13 12:00 ..
-rw------- 1 db2inst1 db2iadm1 3334144 Jul 13 12:52 S0000006.LOG
-rw------- 1 db2inst1 db2iadm1  335872 Jul 13 12:52 S0000007.LOG

db2 "rollforward db sample to 2012-07-13-09.48.00 using local time and stop overflow log path (/experiment/backups/aux)"

Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000014.LOG
 Last committed transaction             = 2012-07-13-09.48.00.000000 Local

DB20000I  The ROLLFORWARD command completed successfully.
[db2inst1@raul aux]$

And that's it. Now, with files S0000006 and S0000007 from the overflow log path added to the files under folder C0000000, we can successfully finish the rollforward process (note we only have files 6 and 7 in the aux directory, but the rollforward uses 6 through 14, i.e. files 8 and higher are searched for in the C0000000 directory).
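Before launching a rollforward against an overflow log path, it can be handy to verify that the S*.LOG sequence restored from tape has no holes, since a missing file in the middle halts the rollforward exactly as we just saw. A minimal sketch; the function is ours, not a DB2 utility:

```shell
# Report the first gap in a directory's S<number>.LOG sequence, if any.
check_log_gaps() {
  local dir=$1 prev=-1 n f
  for f in "$dir"/S*.LOG; do
    [ -e "$f" ] || { echo "no log files in $dir"; return 1; }
    n=$(basename "$f" .LOG)   # e.g. S0000006
    n=$((10#${n#S}))          # strip the 'S', force base 10 despite leading zeros
    if [ "$prev" -ge 0 ] && [ "$n" -ne $((prev + 1)) ]; then
      printf 'gap: S%07d.LOG is missing\n' "$((prev + 1))"
      return 1
    fi
    prev=$n
  done
  echo "sequence complete"
}
# e.g.: check_log_gaps /experiment/backups/pepe
```

Running it against the pepe directory before deleting files 6 and 7 would have reported "sequence complete"; afterwards it reports the first missing file.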


One curious thing: for some reason file S0000006.LOG disappears once the rollforward is done.

[db2inst1@raul aux]$ l
total 340
drwxr-xr-x 2 db2inst1 db2iadm1   4096 Jul 13 13:10 .
drwxr-xr-x 5 db2inst1 db2iadm1   4096 Jul 13 12:00 ..
-rw------- 1 db2inst1 db2iadm1 335872 Jul 13 13:10 S0000007.LOG

We will now completely drop the directory:

/experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000

to make sure that only the log files we have restored from tape are available:

rm -rf /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000000
rm /experiment/backups/aux/*

db2 restore db sample from /experiment/backups taken at 20120713083940 logtarget /experiment/backups/aux
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

mv /experiment/backups/aux/S000000*.LOG /experiment/backups/pepe

db2 "rollforward db sample to 2012-07-13-09.48.00 using local time and stop overflow log path (/experiment/backups/pepe)"

Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000014.LOG
 Last committed transaction             = 2012-07-13-09.48.00.000000 Local

DB20000I The ROLLFORWARD command completed successfully.

Since the log files are no longer in directory C0000000, the location of all the log files required for the rollforward process must be specified via the overflow log path. Once the rollforward is done, our information remains untouched:

db2 connect to sample

Database Connection Information

 Database server        = DB2/LINUX 9.7.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE


db2 select 'max(fecha)' from dropme

1
--------------------------
2012-07-13-09.48.00.729814

1 record(s) selected.

which demonstrates that directory C0000000 is not really necessary for a restore process. It should suffice to back it up every day using the tar command and then erase its content (but not the directory itself).
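That daily routine can be sketched as follows. The script is ours; it is demonstrated on a throwaway mock directory so it can run anywhere, and in practice LOGDIR would point at the real C0000000 path:

```shell
# Archive the log directory's contents with tar, verify the archive is
# readable, then delete the logs while keeping the directory itself.
LOGDIR=$(mktemp -d)      # stand-in for .../SAMPLE/NODE0000/C0000000
touch "$LOGDIR/S0000000.LOG" "$LOGDIR/S0000001.LOG"
TARFILE=$LOGDIR.tar.gz
tar -czf "$TARFILE" -C "$LOGDIR" . \
  && tar -tzf "$TARFILE" > /dev/null \
  && rm -f "$LOGDIR"/S*.LOG
ls -A "$LOGDIR" | wc -l  # the directory is now empty, but still exists
```

The verify step (tar -tzf) matters: deleting the logs only makes sense once the archive is known to be good.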

Another important consideration: every time we RESTORE a database, DB2 creates another directory, generically named C000000n, which becomes the new archived log file directory from that point on.

[db2inst1@raul aux]$ cd ../archived_logs/db2inst1/SAMPLE/NODE0000/
[db2inst1@raul NODE0000]$ l
total 36
drwxr-x--- 9 db2inst1 db2iadm1 4096 Jul 13 13:16 .
drwxr-x--- 3 db2inst1 db2iadm1 4096 Jul 12 09:02 ..
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:19 C0000001
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:22 C0000002
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:26 C0000003
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:37 C0000004
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:53 C0000005
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 13:10 C0000006
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 13:16 C0000007

in such a way that the next log file won't be created in just any directory, but in the most recent one:

db2 get db cfg | grep -i log
...
 Overflow log path                    (OVERFLOWLOGPATH) =
 Mirror log path                        (MIRRORLOGPATH) =
 First active log file                                  = S0000015.LOG
 Block log on disk full               (BLK_LOG_DSK_FUL) = NO
 Block non logged operations           (BLOCKNONLOGGED) = NO
 Percent max primary log space by transaction (MAX_LOG) = 0
...

Note: db cfg parameter OVERFLOWLOGPATH saves us from having to specify on every rollforward operation where the required log files are located. We simply leave them in the directory specified by this parameter and, from then on, the overflow log path clause of the rollforward command is no longer necessary.
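For example (a hypothetical setting matching this experiment's layout, not a command we ran above):

```
db2 update db cfg for sample using OVERFLOWLOGPATH /experiment/backups/pepe
```

From then on, rollforward would look in /experiment/backups/pepe by default.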


If we now create the next archived log, it will be created on C0000007:

db2 import from datos.del of del insert into dropme
SQL3109N  The utility is beginning to load data from file "datos.del".
SQL3110N  The utility has completed processing.  "50041" rows were read from the input file.
SQL3221W  ...Begin COMMIT WORK. Input Record Count = "50041".
SQL3222W  ...COMMIT of any database changes was successful.
SQL3149N  "50041" rows were processed from the input file.  "50041" rows were successfully inserted into the table.  "0" rows were rejected.

Number of rows read       = 50041
Number of rows skipped    = 0
Number of rows inserted   = 50041
Number of rows updated    = 0
Number of rows rejected   = 0
Number of rows committed  = 50041

cd ../archived_logs/db2inst1/SAMPLE/NODE0000/
[db2inst1@raul NODE0000]$ l
total 36
drwxr-x--- 9 db2inst1 db2iadm1 4096 Jul 13 13:16 .
drwxr-x--- 3 db2inst1 db2iadm1 4096 Jul 12 09:02 ..
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:19 C0000001
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:22 C0000002
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:26 C0000003
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:37 C0000004
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 12:53 C0000005
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 13:10 C0000006
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 13 13:24 C0000007

[db2inst1@raul NODE0000]$ cd C0000007
[db2inst1@raul C0000007]$ l
total 4020
drwxr-x--- 2 db2inst1 db2iadm1    4096 Jul 13 13:24 .
drwxr-x--- 9 db2inst1 db2iadm1    4096 Jul 13 13:16 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 13:24 S0000015.LOG


Recovering dropped tables

We will now take a further step. When we configure a linear logging strategy (instead of a circular one), there are several interesting functional benefits we can obtain, among which is the possibility to recover tables that were dropped, accidentally or not.

Let's suppose we dropped a table yesterday at 18:00 and today we want to restore the database including our dropped table. From a classical point of view, we might restore the database to a point in time of, say, 17:59, losing all the transactions committed since then (and thus having to redo them, with everything that implies).

For such cases, however, the database history file provides a solution: it keeps track of activity (not only backups and restores but also tables that have been dropped) and will allow us to restore the database to the current situation and, in addition, to recreate and repopulate the table as it was just before being dropped. Let's see how:

For this scenario we will have a control table containing the timestamp of every record inserted and a new table that will be our victim to illustrate the experiment.

• We will create a victim table (victima).
• We will load records into the control table (dropme). We will also generate more log files. This is not strictly necessary, but we have done it this way to create a logs-dropping-logs situation.
• We shall drop the victim table.
• We will load some more records into the control table (dropme), which will generate more log files.
• We shall restore the database to the current, most recent situation.
• The victim table shouldn't exist.
• We will then restore the database again, this time with the RECOVER DROPPED TABLE clause at ROLLFORWARD time.

Let's sail out!

Just as we said, we create our victim table and insert a few values into it:

db2 'create table victima (c1 integer)'
db2 'insert into victima values (1),(2),(3),(4)'
db2 select '*' from victima

C1
-----------
          1
          2
          3
          4

4 record(s) selected.


We will now load some more rows in our control table dropme and, obviously, we will generate some log files as a result of it.

db2 import from datos.del of del insert into dropme

We can see one new log file has been generated:

l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/
total 40
drwxr-x--- 10 db2inst1 db2iadm1 4096 Jul 13 13:54 .
drwxr-x---  3 db2inst1 db2iadm1 4096 Jul 12 09:02 ..
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 12:19 C0000001
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 12:22 C0000002
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 12:26 C0000003
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 12:37 C0000004
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 12:53 C0000005
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 13:10 C0000006
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 13 13:52 C0000007
drwxr-x---  2 db2inst1 db2iadm1 4096 Jul 18 08:32 C0000008

l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000008/
total 4020
drwxr-x---  2 db2inst1 db2iadm1    4096 Jul 18 08:32 .
drwxr-x--- 10 db2inst1 db2iadm1    4096 Jul 13 13:54 ..
-rw-r-----  1 db2inst1 db2iadm1 4104192 Jul 18 08:32 S0000008.LOG

We drop our victim table:

db2 drop table victima

and we load some more data to generate even more log files:

db2 import from datos.del of del insert into dropme

l /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000008/
total 8032
drwxr-x---  2 db2inst1 db2iadm1    4096 Jul 18 09:00 .
drwxr-x--- 10 db2inst1 db2iadm1    4096 Jul 13 13:54 ..
-rw-r-----  1 db2inst1 db2iadm1 4104192 Jul 18 08:32 S0000008.LOG
-rw-r-----  1 db2inst1 db2iadm1 4104192 Jul 18 09:00 S0000009.LOG


Let's query now the database history file for information regarding dropped tables:

db2 list history dropped table all for db sample

List History File for sample

Number of matching file entries = 1

Op Obj Timestamp+Sequence Type Dev Earliest Log Current Log  Backup ID
-- --- ------------------ ---- --- ------------ ------------ --------------
 D  T  20120718083532                                        000000000500a0e300030007
 ----------------------------------------------------------------------------
 "DB2INST1"."VICTIMA" resides in 1 tablespace(s):

 00001 IBMDB2SAMPLEREL
 ----------------------------------------------------------------------------
 Comment: DROP TABLE
 Start Time: 20120718083532
 End Time: 20120718083532
 Status: A
 ----------------------------------------------------------------------------
 EID: 68

 DDL: CREATE TABLE "DB2INST1"."VICTIMA" ( "C1" INTEGER ) IN "IBMDB2SAMPLEREL" ;
 ----------------------------------------------------------------------------

As we can see, it shows the timestamp at which the table was dropped as well as the DDL statement we should execute to recreate the empty table (no data, only the structure).

Let's see now which is the last row on table dropme:

db2 select 'max(fecha) from dropme'

1
--------------------------
2012-07-18-09.00.48.213576

1 record(s) selected.

We shall now terminate our database session and restore the online full backup we took earlier, plus the archived log files. (NOTE: when closing the DB, an additional log file, S0000010.LOG, will be created.)

db2 restore db sample from /experiment/backups taken at 20120713083940 logtarget /experiment/backups/aux

SQL2539W  Warning!  Restoring to an existing database that is the same as the
backup image database. The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

This command restores the database and creates log files S0000006.LOG and S0000007.LOG in the aux directory. We must copy them to path /experiment/backups/pepe but the rollforward to end of logs will fail. Why?


cd aux
[db2inst1@raul aux]$ l
total 3600
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 18 09:20 .
drwxr-xr-x 5 db2inst1 db2iadm1    4096 Jul 13 12:00 ..
-rw------- 1 db2inst1 db2iadm1 3334144 Jul 18 09:20 S0000006.LOG
-rw------- 1 db2inst1 db2iadm1  335872 Jul 18 09:20 S0000007.LOG
[db2inst1@raul aux]$ mv * ../pepe

db2 "rollforward db sample to end of logs and stop overflow log path (/experiment/backups/pepe)"

SQL1265N  The archive log file "S0000009.LOG" is not associated with the
current log sequence for database "SAMPLE" on node "0".
[db2inst1@raul backups]$

Because we now need files 8 and 9 (recall that when we added more rows we also generated log files S0000008.LOG and S0000009.LOG) in order to roll the database forward to the end of logs.

So we must copy these two files to path /experiment/backups/pepe and repeat the ROLLFORWARD command.

cd /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000008/
cp * /experiment/backups/pepe
l /experiment/backups/pepe
total 45680
drwxr-xr-x 2 db2inst1 db2iadm1    4096 Jul 18 09:29 .
drwxr-xr-x 5 db2inst1 db2iadm1    4096 Jul 13 12:00 ..
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 10:46 S0000000.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 11:59 S0000001.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 13:55 S0000002.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 16:20 S0000003.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 12 19:41 S0000004.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 02:17 S0000005.LOG
-rw------- 1 db2inst1 db2iadm1  335872 Jul 18 09:20 S0000007.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 18 09:36 S0000008.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 18 09:36 S0000009.LOG
-rw-r----- 1 db2inst1 db2iadm1  958464 Jul 18 09:36 S0000010.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:31 S0000011.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000012.LOG
-rw-r----- 1 db2inst1 db2iadm1 4104192 Jul 13 09:47 S0000013.LOG
-rw-r----- 1 db2inst1 db2iadm1  270336 Jul 13 09:49 S0000014.LOG

db2 "rollforward db sample to end of logs and stop overflow log path (/experiment/backups/pepe)"

                                 Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000010.LOG
 Last committed transaction             = 2012-07-18-07.14.21.000000 UTC

DB20000I The ROLLFORWARD command completed successfully.


NOTICE: UTC time corresponds to CEST -2 (07.14 UTC corresponds to local time 09:14).


db2 connect to sample
db2 select 'max(fecha) from dropme'

1
--------------------------
2012-07-18-09.00.48.213576

1 record(s) selected.

And here comes an important point: our victim table doesn't exist (as expected, on the other hand):

db2 select '*' from victima
SQL0204N  "DB2INST1.VICTIMA" is an undefined name.  SQLSTATE=42704

We will now repeat the process BUT this time we will use the RECOVER DROPPED TABLE clause when rolling forward.

db2 restore db sample from /experiment/backups taken at 20120713083940 logtarget /experiment/backups/aux

SQL2539W  Warning!  Restoring to an existing database that is the same as the
backup image database. The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

mv /experiment/backups/aux/S000000* ../pepe

db2 "rollforward db sample to end of logs and stop overflow log path (/experiment/backups/pepe) recover dropped table 000000000500a0e300030007 to /experiment/backups/aux"

VERY IMPORTANT NOTICE: do not remove the leading zeroes or the operation will fail. The backup ID of the table must be specified exactly as it appears in the output of the LIST HISTORY DROPPED TABLE command.
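As a small defensive illustration (the helper is our own, not part of DB2), one can sanity-check the ID before launching the rollforward; the example ID above happens to be 24 characters long:

```shell
# Hypothetical helper: verify a dropped-table backup ID was copied intact.
# LIST HISTORY printed a 24-character ID in this experiment; a stripped
# leading zero would make it shorter and the ROLLFORWARD would fail.
check_backup_id() {
  id="$1"
  if [ "${#id}" -ne 24 ]; then
    echo "suspicious ID length ${#id}: $id"
    return 1
  fi
  echo "ID looks complete: $id"
}
```

Usage: `check_backup_id 000000000500a0e300030007` before pasting the ID into the ROLLFORWARD command.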

Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000010.LOG
 Last committed transaction             = 2012-07-18-07.14.21.000000 UTC

DB20000I The ROLLFORWARD command completed successfully.

The victim table still doesn't exist !!

db2 connect to sample
db2 select '*' from victima
SQL0204N  "DB2INST1.VICTIMA" is an undefined name.  SQLSTATE=42704


BUT this time, when rolling forward with RECOVER DROPPED TABLE, the following has been created under /experiment/backups/aux:

[... aux]$ l
total 12
drwxr-xr-x 3 db2inst1 db2iadm1 4096 Jul 18 10:16 .
drwxr-xr-x 5 db2inst1 db2iadm1 4096 Jul 13 12:00 ..
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 18 10:16 NODE0000

and under NODE0000 there is a file:

l NODE0000/
total 12
drwxr-x--- 2 db2inst1 db2iadm1 4096 Jul 18 10:16 .
drwxr-xr-x 3 db2inst1 db2iadm1 4096 Jul 18 10:16 ..
-rw-r----- 1 db2inst1 db2iadm1    8 Jul 18 10:16 data

It is a simple, plain ASCII text file whose content we may view/edit:

file NODE0000/data
NODE0000/data: ASCII text

cat NODE0000/data
1
2
3
4

At this point we have all the ingredients required to create the table just as it was at its removal time:

db2 'CREATE TABLE "DB2INST1"."VICTIMA" ( "C1" INTEGER )'
DB20000I  The SQL command completed successfully.

[... aux]$ db2 import from NODE0000/data of del insert into victima

SQL3109N  The utility is beginning to load data from file "NODE0000/data".
SQL3110N  The utility has completed processing. "4" rows were read from the
input file.
SQL3221W  ...Begin COMMIT WORK. Input Record Count = "4".
SQL3222W  ...COMMIT of any database changes was successful.
SQL3149N  "4" rows were processed from the input file. "4" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read         = 4
Number of rows skipped      = 0
Number of rows inserted     = 4
Number of rows updated      = 0
Number of rows rejected     = 0
Number of rows committed    = 4


In this case, specifying tablespace IBMDB2SAMPLEREL in the CREATE TABLE statement is optional.


db2 select '*' from victima

C1
-----------
          1
          2
          3
          4

4 record(s) selected.

To take into account

• For a table to be recoverable, the tablespace it resides in must have the DROPPED TABLE RECOVERY option enabled. This can be done at tablespace creation time (the option is ON by default) but also later, through the ALTER TABLESPACE command.

• Only REGULAR tablespaces are recoverable. To check whether a tablespace is recoverable, a query on the DROP_RECOVERY column of the catalog view SYSCAT.TABLESPACES will suffice:

db2 select tbspace from syscat.tables where "tabname='VICTIMA'"

TBSPACE
---------------------
IBMDB2SAMPLEREL

1 record(s) selected.

db2 select drop_recovery from syscat.tablespaces where "tbspace='IBMDB2SAMPLEREL'"

DROP_RECOVERY
-------------
Y

1 record(s) selected.
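If DROP_RECOVERY had shown 'N', the option can be switched on afterwards. A minimal sketch (the helper name is ours; the statement itself is the standard ALTER TABLESPACE syntax):

```shell
# Hypothetical helper: build the statement that re-enables dropped table
# recovery on a tablespace whose DROP_RECOVERY column shows 'N'.
alter_drop_recovery_stmt() {
  # $1 = tablespace name
  printf 'ALTER TABLESPACE %s DROPPED TABLE RECOVERY ON' "$1"
}
```

Usage: `db2 "$(alter_drop_recovery_stmt IBMDB2SAMPLEREL)"`.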

• If the table was in REORG PENDING state at the time it was dropped (e.g. because we added, altered or deleted a column) it might be necessary to modify the CREATE TABLE command so that the DDL matches the data of the data file.

• Indexes won't be recovered as part of a table recovery operation. We will need to recreate them, so keeping their DDL definitions centralized somewhere safe is a good practice.
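One hedged way to keep those DDL definitions safe is the db2look utility; the wrapper below is our own illustration (names are assumptions), while the flags are db2look's standard ones:

```shell
# Hypothetical wrapper around db2look: -e extracts the DDL needed to
# reproduce the object (including its index definitions) and -o writes
# it to a file. Assumes db2look is on the PATH of the instance owner.
extract_table_ddl() {
  # $1 = database alias, $2 = table name, $3 = output file
  db2look -d "$1" -t "$2" -e -o "$3"
}
```

Usage: `extract_table_ddl sample VICTIMA victima_ddl.sql`, then archive the resulting file with the rest of your backups.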


Voilà, here we have our victim table as if it had never been dropped.


Summary

– We may save to cartridge and delete every file located in the current C000000n directory on a daily basis. An alternative is to compress all the files into a tar file and FTP it to a backup server (or similar), using a descriptive name for an easier recovery later, when the time comes.

– As more log files are created, they will appear in the current C000000n directory, which should (this is advice) be saved on a daily, weekly or monthly basis (tar -cvzf command) and erased afterwards, using ascending numbers and timestamps.
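The save-and-erase advice above can be sketched as a small script (a sketch only, under the assumption that the C000000n directory is already closed, i.e. DB2 has moved on to the next one; the function name and target directory are ours):

```shell
# Hypothetical archive step: pack one closed C000000n log directory into
# a timestamped tar.gz and remove the originals afterwards.
archive_log_dir() {
  # $1 = full path of the C000000n directory, $2 = target directory
  name="$(basename "$1")_$(date +%Y%m%d%H%M%S).tar.gz"
  tar -czf "$2/$name" -C "$(dirname "$1")" "$(basename "$1")" && rm -rf "$1"
  echo "$2/$name"
}
```

Usage: `archive_log_dir /experiment/backups/archived_logs/db2inst1/SAMPLE/NODE0000/C0000001 /backups/daily`, then FTP the resulting file to the backup server.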

– It might be advisable (this must be evaluated in terms of size and duration) to include the log files as part of the full backups so that every in-flight transaction is preserved while a full backup is being taken; i.e. a full backup taken with the INCLUDE LOGS clause is completely restorable to a consistent state as of the point in time when it finished, since it contains all the required log files in itself, as we demonstrate next:

db2 restore db sample from /experiment/backups taken at 20120713083940 logtarget /experiment/backups/aux

SQL2539W  Warning!  Restoring to an existing database that is the same as the
backup image database. The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.

db2 "rollforward db sample to end of logs and stop overflow log path (/experiment/backups/aux)"

Rollforward Status

 Input database alias                   = sample
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000006.LOG - S0000007.LOG
 Last committed transaction             = 2012-07-13-06.39.50.000000 UTC

DB20000I The ROLLFORWARD command completed successfully.

– When performing a database restore, we will have to search for, locate, copy and decompress all the log files required to reach a certain point in time (PIT) after restoring a full backup, and once this is done, execute the recovery in two steps:

1. Database restore, as such, extracting the included log files if the INCLUDE LOGS clause had been used during the backup.

2. Rollforward the database to a certain PIT, having copied the previously extracted log files to the directory where the rest of the required log files reside so that we can apply them by pointing to this directory with the overflow log path clause.
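The two steps above can be sketched as follows (a hedged outline; the helper name and the point-in-time value are illustrative, while the db2 commands follow the syntax used throughout this document):

```shell
# Hypothetical two-step recovery wrapper. Paths mirror this document's
# layout; db2 is the regular command line processor.
restore_to_pit() {
  # $1 = backup timestamp, $2 = logtarget dir (logs extracted from the image),
  # $3 = dir holding the rest of the required logs, $4 = target PIT
  db2 "restore db sample from /experiment/backups taken at $1 logtarget $2"
  cp "$2"/S*.LOG "$3"   # merge the extracted logs with the archived ones
  db2 "rollforward db sample to $4 and stop overflow log path ($3)"
}
```

Usage: `restore_to_pit 20120713083940 /experiment/backups/aux /experiment/backups/pepe "end of logs"`.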

– Once the restore is done, a new C000000n directory will be created in the path where DB2 stores its log files. Hence, any other directory pre-existing is no longer necessary and can be erased.

– WARNING: A database backup process usually adds a huge workload to the server on which the database runs, especially in terms of CPU and I/O.


Including logs inside the backups might increase the overhead on an already CPU- and/or I/O-constrained system, with the additional side effects of a likely longer completion time and larger backup files. This should be carefully considered.

– Backup resource consumption can be controlled (at the expense of the backup taking more or less time to complete) through the SET UTIL_IMPACT_PRIORITY command.
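As a hedged sketch (the helper is ours; the command syntax is DB2's), the priority of a running utility can be adjusted like this, where the utility ID comes from the LIST UTILITIES command:

```shell
# Hypothetical helper: throttle a running utility such as a backup.
# Throttling only takes effect if the database manager's util_impact_lim
# configuration parameter is set below 100.
throttle_utility() {
  # $1 = utility ID (from LIST UTILITIES), $2 = priority 1-100 (lower = less impact)
  db2 "SET UTIL_IMPACT_PRIORITY FOR $1 TO $2"
}
```

Usage: `throttle_utility 2 50` to lower the impact of utility ID 2 while it runs.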


End Of Document


APPENDIX A - bombardea.rexx source code

/***************************************************************************/
/* Program that keeps the dropme table being updated, deleted and inserted.*/
/* This generates concurrency and log files.                               */
/*                                                                         */
/* - Requires the REXX language to be installed.                           */
/* - It is more fun to execute it from several open putty windows.         */
/* - Doesn't generate extremely big tables since it sooner or later        */
/*   deletes records previously created (the REXX RANDOM function is not   */
/*   so random after all).                                                 */
/***************************************************************************/
'clear'
alfabeto='abcdefg hijklm nopqr stuv wxyz -_'
comilla="'"
call DECLARA_ACCION
call SELECT_CAMPOS
call CLAUSULA_WHERE
call CLAUSULA_ORDER
'db2 connect to sample'
do 20000
  interpret 'comando=accion.'random(1,4)
  select
    when comando='SELECT ' then do
      interpret "comando=comando||c."random(1,8)||" from dropme "
      interpret "comando=comando||w."random(1,4)
      interpret "comando=comando||o."random(1,5)
    end
    when comando='INSERT ' then do
      call GENERA_CADENA
      comando=comando||"into dropme(literal,numero) values("
      interpret "comando=comando||"'cadena'"||','"random(1,99999)"||')'"
    end
    when comando='DELETE ' then do
      comando=comando||"from dropme "
      interpret "comando=comando||w."random(1,4)
    end
    when comando='UPDATE ' then do
      call GENERA_CADENA
      call UPDATE_CAMPOS
      comando=comando||"dropme set "
      interpret "comando=comando||u."random(1,2)
    end
    otherwise nop
  end
  'db2 "'comando'"'
end
return

DECLARA_ACCION:
accion.1='SELECT '
accion.2='INSERT '
accion.3='UPDATE '
accion.4='DELETE '
return


SELECT_CAMPOS:
c.1='id'
c.2='fecha'
c.3='literal'
c.4='numero'
c.5='*'
c.6='count(*)'
c.7='substr(literal,1,10)'
c.8='max(numero)'
return

CLAUSULA_WHERE:
interpret "w.1 = ' '|| 'where id = '"random(1,99999)||' '
w.2 = " where fecha = current date "
w.3 = " where length(rtrim(ltrim(literal)))=20 "
w.4 = " where numero between 1000 and 10000 "
return

CLAUSULA_ORDER:
o.1=' order by id '
o.2=' order by fecha '
o.3=' order by literal '
o.4=' order by numero '
o.5=' order by id desc'
o.6=' order by fecha desc '
o.7=' order by literal desc '
o.8=' order by numero desc '
o.9=' order by substr(literal,5,20) '
return

GENERA_CADENA:
numcar=random(1,50)
cadena=''
do i=1 to numcar
  cadena=cadena||substr(alfabeto,random(1,length(alfabeto)),1)
end
cadena=comilla||cadena||comilla
return

UPDATE_CAMPOS:
u.1="literal ="cadena
u.2="numero ="||random(1,99999)
return

This program can be executed with the command: rexx bombardea.rexx
