May 05 2014

There’s a neat optimization in Oracle I found while tinkering around (look closely at the predicate section):

select oracle_can_now_use
from   queries_with_no_functions
where  a_date_col_with_an_fbi = :a_date
and    oracle_version >= '';

| Id  | Operation                   | Name                      | Rows  |
|   0 | SELECT STATEMENT            |                           |     1 |
|*  2 |   INDEX RANGE SCAN          | QUWF_DATE_FBI             |     1 |

Predicate Information (identified by operation id):

   2 - access(TRUNC(INTERNAL_FUNCTION("A_DATE_COL_WITH_AN_FBI"))=TRUNC(:A_DATE))
Line 2 of the predicates shows that the INDEX RANGE SCAN was accessed via TRUNC(INTERNAL_FUNCTION()) – a function-based index was used even though no function appears in the where clause!

This optimization appeared with fix 9263333 – I have to thank the people of Oracle-L (particularly Velikikh Mikhail) for identifying when this improvement came in.

This optimization isn’t restricted to TRUNC() on date columns, however.
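To make the behaviour above concrete, here’s a minimal sketch of the kind of setup that triggers it. The table, column, and index names simply mirror the mock query above; the bind value is a placeholder.

```sql
-- Hypothetical names, mirroring the mock query above.
create table queries_with_no_functions (
  a_date_col_with_an_fbi date
);

create index quwf_date_fbi
  on queries_with_no_functions (trunc(a_date_col_with_an_fbi));

-- No function in the predicate, yet on versions containing fix 9263333
-- the optimizer can still range-scan the trunc() function-based index:
select *
from   queries_with_no_functions
where  a_date_col_with_an_fbi = :a_date;
```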

Apr 07 2014

So far in the joins series we’ve looked at the effect removing joins (via denormalization) has on performance. We’ve seen that joins can cause primary key lookups to do more work. Lowering the normalization level to remove these joins can negatively impact “search” style queries though. More importantly, we’ve seen the real cost of denormalizing to remove joins comes when updating records, potentially leading to concurrency waits and application bugs.

So are joins always “good”?

The fastest way to do anything is to not do it at all. If joins aren’t necessary to answer your queries, including them adds overhead. Also, like any tool, there are situations where adding a join may substantially slow your query down.

Here are some examples where joins may be “expensive”, and strategies for coping with them.

Apr 04 2014

In the previous article in the joins series we compared query performance between a third normal form schema and the same schema denormalized to second normal form. We then extended the example so our denormalized schema was in just first normal form.

The normalized approach performed better overall. The differences were small though – generally just a few consistent gets, and all the queries executed in under a second. As Jeff Atwood points out, hardware is powerful enough that for most systems the performance difference between more normalized and less normalized schemas won’t matter much to your queries. I disagree slightly with one of his conclusions though – that normalization “doesn’t matter”.

To see why, let’s look at a different use-case: data modification. While this isn’t to do with joins directly, it is a very important consideration when deciding to denormalize to “remove joins”.

I’m not going to get into the details of the updates’ performance – there’s a much bigger problem waiting for us. Can you see what it is?

Mar 31 2014

Continuing the series on joins, I’m going to look at denormalization. This process reduces the number of joins necessary to return results for a schema.

One of the big arguments against normalizing data is “for performance”. The process of normalization creates new tables as relations are decomposed according to their functional dependencies. This means (more) joins are necessary to return the same results.

A google of “database normalization performance” turns up several articles like this, this and this, all advocating denormalizing your data to improve performance. There’s no concrete discussion or test case showing why you should denormalize, just hand-wavy arguments about joins being bad.

I wanted to test this to see if normalizing really makes performance worse. If you’ve been preaching “you must denormalize for performance”, my conclusions may surprise you.

Dec 09 2013

A compound trigger curiosity I found recently:

  forall triggers in 
      compound.first .. compound.last
    insert into only_fire_once (
      per_statement
    ) values (
      compound(triggers)
    );

  forall triggers in 
      compound.first .. compound.last
    update firing
    set    trigger_execution = 
             'once per array entry'
    where  we_equal_the = compound(triggers);

For some reason, compound triggers only fire once per statement when using forall .. insert (regardless of the number of records in the array), but once per array entry for forall .. update statements! For further discussion and examples, keep reading.
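A minimal sketch to try this yourself – the table and trigger names here are hypothetical, and the trigger simply logs each firing of its statement-level section so you can count how often it runs:

```sql
-- Hypothetical setup: log every firing of the statement-level section.
create table trigger_log (fired_at timestamp);
create table target_table (val number);

create or replace trigger trg_count_firings
for insert or update on target_table
compound trigger
  before statement is
  begin
    insert into trigger_log values (systimestamp);
  end before statement;
end trg_count_firings;
/

declare
  type num_tab is table of number;
  vals num_tab := num_tab(1, 2, 3);
begin
  -- Per the observation above: one statement-level firing here,
  -- regardless of how many entries are in the array...
  forall i in vals.first .. vals.last
    insert into target_table values (vals(i));

  -- ...but one firing per array entry here.
  forall i in vals.first .. vals.last
    update target_table
    set    val = vals(i) * 10
    where  val = vals(i);
end;
/
```

Counting the rows in trigger_log after each forall shows the difference.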

Sep 16 2013
create cluster your_data_together (
  for_super_efficient_access integer
);

create table when_i_am_frequently_queried (
  and_performance_matters    varchar2(1) 
                               default 'Y',
  for_super_efficient_access integer
) cluster your_data_together (
  for_super_efficient_access
);

create table and_joined_to_me (
  clustering_places_us_in    integer,
  the_same_physical_location integer,
  for_super_efficient_access integer
) cluster your_data_together (
  for_super_efficient_access
);
Not to be confused with RAC (real application clusters), table clusters can 
be used to locate records with a common key into one place. This saves some
space as the cluster key is only stored once for all the tables in it. The 
bigger advantage is that all the tables in a cluster are physically located 
in the same place, so joins between these tables require fewer I/O operations 
than the equivalent heap tables.

If you’d like to know more, have a look at the documentation, or Iggy Fernandez has an example here (scroll to the “Appendix” section).
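One gotcha worth calling out: for an index cluster (the default kind, as in the snippet above) you must create the cluster index before you can insert any rows. A sketch with hypothetical order tables:

```sql
-- Hypothetical names; an order and its lines share the cluster key.
create cluster order_cluster (order_id integer);

-- An index cluster needs its cluster index before rows can be inserted:
create index order_cluster_idx on cluster order_cluster;

create table orders (
  order_id integer primary key
) cluster order_cluster (order_id);

create table order_lines (
  line_id  integer primary key,
  order_id integer
) cluster order_cluster (order_id);

-- Rows from both tables with the same order_id sit in the same blocks,
-- so this join typically needs fewer I/O operations than with heap tables:
select *
from   orders o
join   order_lines l on l.order_id = o.order_id
where  o.order_id = :id;
```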

Mar 22 2013
with the_sql_monitor as (
  select a_great_way_to_see
  from   your_painfully_slow_queries
  where  tuning_pack_license = 'Y'
  and    the_boss = 'screaming for the result'
)
select * 
from   the_sql_monitor
where  real_time_execution_plans = 
           'taking !$@# forever to run';
This really is an outstanding feature to help track the progress of that 4 hour SQL query, showing which part of the execution plan is chewing up all the time so you know where to direct your tuning efforts.

For a great description of using the SQL monitor, have a read of this Oracle-Base article.
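For the impatient, here’s one way to pull a report yourself (Tuning Pack license required). The sql_id below is a placeholder – look yours up in v$sql_monitor; the MONITOR hint forces monitoring for statements that wouldn’t otherwise cross the automatic thresholds:

```sql
-- Force monitoring of a statement (hypothetical table name):
select /*+ monitor */ count(*) from some_big_table;

-- Then fetch the report; replace the sql_id placeholder with the
-- real value from v$sql_monitor:
select dbms_sqltune.report_sql_monitor(
         sql_id => 'abcd1234efgh5',
         type   => 'TEXT'
       )
from   dual;
```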

Mar 18 2013
create profile highly_restricted limit 
  cpu_per_session 1
  sessions_per_user 1
  logical_reads_per_session 10
  logical_reads_per_call 10;

alter user cowboy_developer 
profile highly_restricted;

Profiles have been superseded by the Database Resource Manager for limiting resource usage; however, profiles are still very useful for setting password timeouts, account locks, etc.
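For example, the password-control side of profiles still has no Resource Manager equivalent. A sketch – the profile name and limit values here are just illustrative:

```sql
-- Illustrative limits only; tune to your own security policy.
create profile password_policy limit
  failed_login_attempts 5     -- lock the account after 5 bad passwords...
  password_lock_time    1     -- ...for 1 day
  password_life_time    90    -- force a password change every 90 days
  password_grace_time   7;    -- with a 7 day warning period

alter user cowboy_developer profile password_policy;
```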

To try this out yourself, use this script.

Feb 13 2013
select how_to_write_this_properly
from   index_breaking_predicates
where  trunc(i_want_all_the_times) =
         to_date (:on_this_date, 'dd/mm/yyyy')
and    the_numbers + 1 = :one_less_than_this
and    instr(text, :starting_with_this) = 1;

Rewritten so plain single-column indexes can do the work:

where i_want_all_the_times >= 
    to_date (:on_this_date, 'dd/mm/yyyy')
and i_want_all_the_times < 
    to_date (:on_this_date, 'dd/mm/yyyy') + 1 
and the_numbers = :one_less_than_this - 1
and text like :with_this_at_the_start || '%'
While you could create function-based indexes to satisfy these conditions, they will be less reusable.
trunc(date) is a common offender in my experience, with indexes like this preventing "time-of-day" queries from using the index.
Feb 11 2013
create table physically_ordered_data (
  can_vastly_reduce_your_io   integer,
  and_reduce_buffer_cache_use integer,
  when_querying               varchar2(1) 
                              default 'Y',
  just_define_your_table_as   integer,
  constraint on_the_leading_columns_of_the
  primary key (
    can_vastly_reduce_your_io
  )
) organization index;
IOTs are woefully under-used in Oracle. While there are some provisos, 
the benefits can be great when fetching several rows from a table via the first columns in the primary key.

Martin Widlake has an excellent series on IOTs explaining how they work and their benefits and caveats. I thoroughly recommend you read it.