This page is for my favourite tweaks to improve Oracle database performance.
(Work in progress – I’ll keep adding items to this page as I think of them).
Nologging operations can be slowed down by having to access the control file to record unrecoverable actions affecting each datafile, especially with LOBs and/or concurrent/parallel processing.
This overhead can be disabled (eg during a reorganisation) with:
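The command that belongs here is presumably the following (the parameter name is my assumption, inferred from the bug reference in the next paragraph):

```sql
-- Stop recording unrecoverable SCNs in the control file (assumed parameter)
alter system set db_unrecoverable_scn_tracking=false;
```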
This is not truly dynamic until Bug 12360160 is fixed in your release.
Until then, toggle control file tracking of nologging operations with these commands:
alter system set events '10359 trace name context forever, level 1';  -- disable control file tracking
alter system set events '10359 trace name context off';               -- restore the default
The automatic workarea is not ideal for administration or batch jobs, so disable it and manually size sort and hash areas, eg:
alter session set workarea_size_policy=manual;
alter session set hash_area_size=209715200;
alter session set sort_area_size=209715200;
alter session set sort_area_retained_size=209715200;
If a database is using an I/O sub-system tuned for 1MB I/Os, then allow the automatic workarea to use 1MB I/Os, eg:
alter system set "_smm_auto_max_io_size"=1024;
alter system set "_smm_auto_min_io_size"=64;  -- increment higher until multi-pass sorts happen
Database Writer and Checkpoints
If a database is using an I/O sub-system tuned for 1MB I/Os, (especially with cached BasicFile LOBs), then set the maximum DBWR I/O size, eg:
alter system set "_db_writer_coalesce_write_limit"=1048576;
During periods of heavy activity affecting many different blocks, (eg maintenance activities), I find it helpful to keep the database writer active by setting either fast_start_mttr_target (Ent Ed) or log_checkpoint_interval (Std Ed). See my post on checkpoint tuning.
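A sketch of the two options (the values are illustrative, not recommendations):

```sql
-- Enterprise Edition: target roughly five minutes of crash recovery
alter system set fast_start_mttr_target=300;

-- Standard Edition: checkpoint after roughly 100,000 redo blocks
alter system set log_checkpoint_interval=100000;
```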
To choose between direct and cached reads for a session's serial full scans (most useful in 10g and in environments that use asynchronous I/O):

-- Direct reads
alter session set "_serial_direct_read"=true;
alter session set "_small_table_threshold"=1;

-- Cached reads
alter session set "_serial_direct_read"=false;
alter session set "_small_table_threshold"=999999999;
Direct writes (eg during direct-path inserts) may be smaller than the optimum size, so set the direct I/O slot size using event 10351; the level is in database blocks, eg level 128 = 1MB with an 8KB block size:
alter session set events '10351 trace name context forever, level 128';
Disable the DBMS_STATS small-object optimisation.
Useful when gathering statistics in parallel on bitmap indexes with hundreds or thousands of tiny partitions; otherwise Oracle may decide to run serially, which can take much longer.
alter session set events '38028 trace name context forever';
For servers with a high number of virtual CPUs, set CPU_COUNT to the number of cores and PARALLEL_THREADS_PER_CPU to the number of threads per core. There is conflicting advice about this on MOS, but in my experience CPU_COUNT should be set manually for such CPUs.
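A sketch assuming a server with 16 cores and 8 threads per core (adjust to your hardware):

```sql
alter system set cpu_count=16 scope=spfile;
alter system set parallel_threads_per_cpu=8 scope=spfile;
```

Set in the spfile, both take effect at the next restart.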
UFS is a no-extra-cost filesystem offering concurrent direct I/O and LWP-based simulated asynchronous I/O. When the system is configured for large (usually 1MB) I/Os and an effort is made to avoid a couple of gotchas, the performance will be very good.
Mount filesystems dedicated to live database files with noatime.
Use mount option forcedirectio and/or filesystemio_options=setall
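A hedged /etc/vfstab sketch combining both mount options (the metadevice names and mount point are illustrative):

```
/dev/md/dsk/d100  /dev/md/rdsk/d100  /u01  ufs  2  yes  forcedirectio,noatime
```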
Set kernel parameters maxphys and md_maxphys, and confirm that each filesystem has maxcontig set to match, eg:
In /etc/system (applied at the next reboot):

set maxphys=1048576
set md:md_maxphys=1048576

Or on a running system with mdb:

echo "maxphys/W 0t1048576" | mdb -kw
echo "md_maxphys/W 0t1048576" | mdb -kw
Set and check maxcontig (in filesystem blocks, eg 128 x 8KB blocks = 1MB):

tunefs -a 128 device
fstyp -v device
Reduce UFS fragmentation by creating/restoring one datafile at a time per filesystem and pre-allocating space in multiples of 16MB (or the value of maxbpg).
Avoid sparse files (which prevent concurrent direct I/O) by pre-allocating datafile space (use auto-extension as a safety net only), and copy tempfiles with OS tools before use, since tempfiles are created sparse.
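A sketch of pre-allocating space with auto-extension in 16MB multiples as a safety net (the path and sizes are illustrative):

```sql
alter tablespace users add datafile '/u01/oradata/DB/users02.dbf'
  size 4096m autoextend on next 16m maxsize 8192m;
```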