Exadata Performance Tuning

Based on the execution plan, take the following steps:

  • Improve offloading/smart scans by eliminating unnecessary indexes (simply make an index invisible, re-check the execution plan, and repeat for the next candidate index)
  • Check for SQL quarantines and find the reason behind each one
  • Eliminate any cause of reading extra blocks during a full table scan, such as read-consistency (undo) reads and row chaining/migration
  • Set the table storage parameters properly:
    • The small table threshold (the hidden _small_table_threshold parameter) is a determining factor for direct path full scans. Its calculation depends on the version, and it is based on the size of the segment relative to the buffer cache. Note: for a partitioned table, each partition is a separate segment.
    • Force direct path reads and Bloom filters;
    • Set the parameters for parallel processing and PX adaptive offloading properly.
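The first two steps above can be sketched in SQL as follows (the index, table, and predicate names are hypothetical; the dictionary views and DBMS_XPLAN call are standard Oracle):

```sql
-- Step 1: make a suspect index invisible, then re-check the execution plan
ALTER INDEX sales_amt_ix INVISIBLE;

EXPLAIN PLAN FOR
SELECT * FROM sales WHERE amount > 1000;

-- Look for TABLE ACCESS STORAGE FULL, which indicates smart scan eligibility
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Revert if the plan got worse
ALTER INDEX sales_amt_ix VISIBLE;

-- Step 2: list quarantined SQL (Oracle 19c and later)
SELECT name, sql_text FROM dba_sql_quarantine;

-- Check how many bytes of a statement were eligible for cell offload
SELECT sql_id, io_cell_offload_eligible_bytes
FROM   v$sql
WHERE  io_cell_offload_eligible_bytes > 0;
```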

That’s it


How to delete rows from a large table

Solution depends on the following:

  1. Is enough space available?
  2. Is the table partitioned?
  3. What percentage of the table will be deleted?
  4. How many indexes are there?
  • Solution based on CTAS (create a temporary copy of the table):

CREATE TABLE <temp table> AS
SELECT *
FROM <source table>
WHERE <filter that keeps the rows to be retained>;

DROP TABLE <source table>;

RENAME <temp table> TO <source table>;
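As a concrete sketch of the CTAS approach (the table, column, and filter below are hypothetical), keep only the rows that should survive the delete:

```sql
-- Create the replacement table containing only the rows to keep;
-- NOLOGGING and PARALLEL speed up the bulk copy
CREATE TABLE orders_keep NOLOGGING PARALLEL 8 AS
SELECT *
FROM   orders
WHERE  order_date >= DATE '2023-01-01';

DROP TABLE orders;
RENAME orders_keep TO orders;
```

Note that CTAS does not carry over indexes, constraints, triggers, or grants; re-create them after the rename.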

  • Solution based on rowid and blocks:

< in progress>