Rebuilding Oracle indexes is nothing new, but there are contrasting ideas about when this action should be taken. Some argue that there is usually no real benefit, and others set automatic jobs to rebuild indexes regularly as a rule.
I think each case should be evaluated on its merits. Without going into detail (the topic is well covered elsewhere), some indexes naturally tend towards 75% capacity and others towards more than 90%. Index capacities well under 75% are usually due to deletions. The only general rule I’d suggest is that when data is deleted, especially by a few large operations, the related indexes might be worth checking to see whether a reorganisation is warranted.
Compact (dense) indexes mean less disk space, less memory for caching the same information, less I/O and less processing (CPU time).
Rebuilding, coalescing (Enterprise Edition) or shrinking space (10g+) can be big and easy wins. There are many cases I could describe, but here is the most extreme example.
Purging Performance Problem
An organisation approached me after they had been trying to purge years’ worth of data from an Oracle 9i database, but found that the performance was atrocious; they were after that elusive “go fast” switch.
The application had a purging function built in (rare, but so beneficial!), but the code couldn’t be adjusted. There were no application design changes possible, e.g. partitioning by time and efficiently dropping partitions instead of deleting.
The application purged data by asking for a period to retain and one to purge, then looping – deleting more recent data in the first iteration followed by incrementally older data in the next iterations. Here is a simplified example of what I believe it was doing from the description given to me:
delete where data_date between :retention_date-1 and :retention_date;
commit;
delete where data_date between :retention_date-2 and :retention_date;
commit;
delete where data_date between :retention_date-3 and :retention_date;
commit;
....
This was probably designed to reduce undo space requirements.
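A loop along these lines would reproduce the behaviour described; this is only a sketch, as the real application code wasn’t available, and the table and column names are invented:

```sql
-- Hypothetical reconstruction of the purge loop; app_data and
-- days_to_purge are placeholders, not names from the real application.
BEGIN
  FOR n IN 1 .. :days_to_purge LOOP
    -- Each iteration re-covers the already-purged range plus one more day.
    DELETE FROM app_data
     WHERE data_date BETWEEN :retention_date - n AND :retention_date;
    COMMIT;  -- commit per iteration, presumably to limit undo usage
  END LOOP;
END;
/
```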
The data_date key was indexed, which avoided tablescans. Oracle would handle each deletion by entering the index at retention_date - N and then (range) scanning the index up to the retention date, deleting as it went.
The problem is obvious to those who understand how Oracle indexes work. The deleted index entries result in more and more empty leaf blocks between the data yet to be purged and the retention date. These empty leaf blocks are not taken out of the index structure until they are reused elsewhere (by an insert or update). The index range scan had to access more and more of these empty leaf blocks as the purging process continued, requiring more processing time for each subsequent delete.
The solution I suggested was to punctuate the purge cycles with manual index coalesce commands to remove the empty leaf blocks. The first coalesce of the relevant index took several minutes, but subsequent coalesces took less time. This is the feedback we received once the index coalesce was added to the purging process:
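The command itself is a one-liner run between purge iterations; the index name here is hypothetical, standing in for the data_date index the deletes were range-scanning:

```sql
-- Merge sparsely populated leaf blocks and return the emptied blocks to
-- the index freelist, without rebuilding the whole index.
ALTER INDEX app_data_date_idx COALESCE;
```

Unlike a rebuild, a coalesce works within the existing index segment, so it needs no extra disk space and doesn’t block concurrent DML for long periods.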
“Wow……..what previously took 1 hour now takes about 25s. Yes, I can now purge a day off the table in about 25s. This is great. A whole week in 1 minute. and a month in 10 minutes.”
In this case, it wasn’t even necessary to access the database to find the solution. When one does have access to the database, evidence that the index should be rebuilt / shrunk / coalesced can come from:
- a high logical read count for a SQL statement, and
- a high logical read count for an index segment, and
- a low density shown by the results of analyze index .. validate structure, or
- a low density shown by the results of the segment advisor (10g), or
- a low estimated density from the key length x number of keys versus actual index size, or
- a tree dump of the index showing many empty (or mostly empty) leaf blocks in the structure
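As a sketch of the analyze-based check above (index name hypothetical), the density figures land in the session-private INDEX_STATS view; note that validate structure locks the underlying table while it runs, so it’s best done in a quiet period:

```sql
-- Populates INDEX_STATS with one row, visible only to this session.
ANALYZE INDEX app_data_date_idx VALIDATE STRUCTURE;

-- pct_used well under ~75%, or del_lf_rows high relative to lf_rows,
-- suggests the index is a candidate for coalesce/shrink/rebuild.
SELECT name,
       height,
       lf_rows,       -- current leaf entries
       del_lf_rows,   -- leaf entries marked deleted
       pct_used       -- btree space actually used, as a percentage
  FROM index_stats;
```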