Database re-organisations exist only as fond and distant memories of old Oracle DBAs.  Well… almost.  I was asked to purge, partition and reclaim space from audit tables that had grown to be 100s of gigabytes each.  The real problem was that they contained LOBs (BasicFiles, not SecureFiles).  This post lists a few of the tweaks and tricks I used to bring the ancient ritual of the reorg into the modern age (11gR2).

In a nutshell, we wanted to move a schema from its current tablespace into a new one, leaving behind any data older than 12 months, and partitioning the largest objects at the same time.

  • The UFS filesystems, Solaris OS and Oracle database were configured to allow concurrent direct 1MB I/Os (most efficient for our environment).
  • The destination tablespaces were new and empty, so they could be configured without flashback or redo logging.  This dramatically reduced overhead when copying data into the tablespace, while allowing us to use a guaranteed restore point to flashback the rest of the database if the reorg failed.  (The new tablespaces would have to be dropped before flashing back the database, but they were only populated during the reorg exercise, and the old tablespaces would be retained until after the reorg had completed, so it doesn’t matter that we couldn’t recover the new tablespaces).
  • The destination tablespaces were created with evenly sized and pre-allocated datafiles.  The number of datafiles was a multiple of the degree of parallelism used for the compressed backup of the tablespaces.  (See next point).  The pre-allocation avoids auto-extension delays during the reorg, and the size and number of datafiles ensure the load will be shared evenly between the parallel backup workers.
  • After the reorg, redo logging was re-enabled for the objects in the new tablespaces, then a parallel RMAN backup of the new tablespaces was taken with low compression (fastest in this environment).  This required less capacity and time than logging full redo during the reorg.  It made a recovery possible if needed between the reorg and the next full database backup.
  • The tables to be partitioned were pre-created with temporary names, parallel degrees, and nologging attributes for the LOBs.
  • Data was transferred with parallel direct inserts or alter table move commands.
  • Tracking of unrecoverable actions in the datafiles was disabled using event 10359.  (The dynamic DB_UNRECOVERABLE_SCN_TRACKING parameter isn’t available until a later release).  This prevents the bottleneck which would otherwise form around access to the controlfile when relocating data in nologging mode.
  • Direct writes were increased to 1MB with event 10351, e.g.:
    alter system set events '10359 trace name context forever, level 1:10351 trace name context forever, level 128';
  • Indexes were created in parallel using large manual work areas, nologging and non-sparse temp files.
  • Constraints were created with ENABLE NOVALIDATE then altered to ENABLE VALIDATE to complete in seconds.  (A big time saving).
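The steps above can be sketched roughly as follows.  This is a sketch only: all object names, file paths, sizes, partition bounds and degrees of parallelism (AUD_NEW, AUDIT_LOG, AUDIT_LOG_NEW, PAYLOAD, PARALLEL 8, etc.) are hypothetical placeholders, not the actual schema from the exercise.

```sql
-- Guaranteed restore point, taken before any nologging work begins.
CREATE RESTORE POINT before_reorg GUARANTEE FLASHBACK DATABASE;

-- Destination tablespace: pre-allocated, evenly sized datafiles
-- (a multiple of the RMAN backup parallelism), flashback off.
CREATE TABLESPACE aud_new
  DATAFILE '/u01/oradata/DB/aud_new01.dbf' SIZE 32G,
           '/u01/oradata/DB/aud_new02.dbf' SIZE 32G,
           '/u01/oradata/DB/aud_new03.dbf' SIZE 32G,
           '/u01/oradata/DB/aud_new04.dbf' SIZE 32G
  FLASHBACK OFF;

-- Events from the post: 10359 disables unrecoverable-SCN tracking,
-- 10351 raises the direct write size to 128 blocks (1MB with an 8KB block).
ALTER SYSTEM SET EVENTS '10359 trace name context forever, level 1:10351 trace name context forever, level 128';

-- Pre-created partitioned copy with a temporary name, a parallel degree,
-- and nologging BasicFile LOB storage.
CREATE TABLE audit_log_new (
  audit_id   NUMBER NOT NULL,
  audit_date DATE   NOT NULL,
  payload    CLOB
)
PARTITION BY RANGE (audit_date) (
  PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
)
TABLESPACE aud_new
PARALLEL 8 NOLOGGING
LOB (payload) STORE AS BASICFILE
  (TABLESPACE aud_new ENABLE STORAGE IN ROW NOLOGGING);

-- Parallel direct-path load of the last 12 months only.
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO audit_log_new t
SELECT * FROM audit_log
WHERE  audit_date >= ADD_MONTHS(SYSDATE, -12);
COMMIT;

-- Constraints: enable without validating, then validate separately.
ALTER TABLE audit_log_new
  ADD CONSTRAINT audit_log_new_pk PRIMARY KEY (audit_id)
  USING INDEX NOLOGGING
  ENABLE NOVALIDATE;
ALTER TABLE audit_log_new
  MODIFY CONSTRAINT audit_log_new_pk ENABLE VALIDATE;

-- Re-enable logging before the post-reorg RMAN backup.
ALTER TABLE audit_log_new LOGGING;
ALTER TABLE audit_log_new MODIFY LOB (payload) (NOCACHE LOGGING);
```

The ordering matters: the restore point must exist before the first nologging operation, and logging must be back on before the RMAN backup that replaces the missing redo.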

LOBs are Anchors

The biggest obstacle to moving the data quickly was that the largest tables used LOBs (basicfiles).  How ironic that the datatype invented for holding large amounts of data is also the most awkward to manage.

The large amount of LOB data negated any benefit from using datapump in parallel.  (See bug 5943346: PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN).

Luckily, most of the LOB data was small enough to be stored in-row (so it would be handled like any other datatype), but some was still stored in LOB segments.  Having data in LOB segments meant a lot of single-block I/Os would be required during the reorg, and that parallel inserts would be limited to one PQ process per destination partition.  (There is no intra-partition parallel support for LOBs).

It would have been much worse without disabling the unrecoverable SCN tracking mentioned above.  Enabling caching of the LOBs would have allowed some multi-block I/O, but it would also have increased redo and undo generation, so it was not a useful option.
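To gauge how much of the LOB data would stay in-row, a rough distribution query along these lines helps.  Names are hypothetical; the 3964-byte figure is the usual BasicFile in-row limit (4000 bytes minus locator overhead), and DBMS_LOB.GETLENGTH returns characters for a CLOB rather than bytes, so treat the split as an approximation.

```sql
-- Approximate split of LOB values that fit in-row vs. spill to the LOB segment.
-- A BasicFile created with ENABLE STORAGE IN ROW keeps values up to roughly
-- 3964 bytes in the row piece; anything larger lives in the LOB segment.
SELECT CASE
         WHEN DBMS_LOB.GETLENGTH(payload) <= 3964 THEN 'in-row'
         ELSE 'out-of-row'
       END AS placement,
       COUNT(*) AS value_count,
       ROUND(SUM(DBMS_LOB.GETLENGTH(payload)) / 1024 / 1024) AS approx_mb
FROM   audit_log
GROUP  BY CASE
            WHEN DBMS_LOB.GETLENGTH(payload) <= 3964 THEN 'in-row'
            ELSE 'out-of-row'
          END;
```

A high out-of-row share warns you to expect the single-block I/O and per-partition parallelism limits described above.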


2 thoughts on “Modern Age Database Reorganisation vs LOBs”

    1. Hi Fabio,
      In the case I described, I moved the data of large tables with LOBs a few at a time in different sessions because the system could handle the CPU load and the IOPS required for all the single block LOB accesses.
      I used direct path inserts because of speed and archive/backup capacity constraints, so concurrent insert sessions wouldn’t be possible. If those constraints didn’t exist, then perhaps testing the speed of concurrent conventional inserts (as you did in your blog example with imp) would be worth a go, especially if the LOB data was mostly stored out-of-row. Whichever way you try to work around it, the presence of the LOBs causes problems when trying to move a lot of data.
