Jun 03 2014

I insert a row into a table, roll it back, yet somehow I’ve created a new object in my database!

select count(*) from user_objects;

  COUNT(*)
----------
         2

insert into a_table
values ('that create', 1, 'new object');

rollback;

select count(*) from user_objects;

  COUNT(*)
----------
         3

There are no triggers or other “trickery” involved – the code above is a copy-paste from a database where I have the only session.

How did this happen? Put your answers in the comments!

If you’d like to get more posts on using and designing Oracle databases, enter your details in the form below to receive future updates.

May 05 2014

There’s a neat optimization in Oracle I found while tinkering around (look closely at the predicate section):

select oracle_can_now_use,
       a_function_based_index
from   queries_with_no_functions
where  a_date_col_with_an_fbi = :a_date
and    oracle_version >= '11.2.0.2';

-------------------------------------------------------------------------
| Id  | Operation                   | Name                      | Rows  |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                           |     1 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| QUERIES_WITH_NO_FUNCTIONS |     1 |
|*  2 |   INDEX RANGE SCAN          | QUWF_DATE_FBI             |     1 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ORACLE_VERSION">='11.2.0.2' AND "A_DATE_COL_WITH_AN_FBI"=:A_DATE)
   2 - access(TRUNC(INTERNAL_FUNCTION("A_DATE_COL_WITH_AN_FBI"))=TRUNC(:A_DATE))

Line 2 of the predicates states that the INDEX RANGE SCAN was accessed via TRUNC(INTERNAL_FUNCTION()) – a function-based index was used even though no function is present in the where clause!

This optimization appeared in 11.2.0.2 (fix 9263333) – I have to thank the people of Oracle-L (particularly Velikikh Mikhail) for identifying when this improvement came in.
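
For reference, here’s a minimal sketch of the kind of setup that could produce the plan above – the table, column and index names are my guesses from the plan, not the actual test case:

create table queries_with_no_functions (
  a_date_col_with_an_fbi date,
  oracle_version         varchar2(10),
  oracle_can_now_use     varchar2(10),
  a_function_based_index varchar2(10)
);

-- a function-based index on TRUNC() of the date column
create index quwf_date_fbi on
  queries_with_no_functions ( trunc(a_date_col_with_an_fbi) );

Since "a_date_col_with_an_fbi = :a_date" implies "trunc(a_date_col_with_an_fbi) = trunc(:a_date)", the optimizer can derive the TRUNC() predicate itself, use the index for access and keep the original equality as a filter.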

This optimization isn’t restricted to TRUNC() on date columns however.
Continue reading »

Apr 11 2014

Over the past few articles we’ve looked at database joins. It started out with me noticing that joins appear to be getting bad press recently and wondering whether they really are the root of all evil that some people seem to think they are.

We’ve seen that denormalizing removes joins, giving performance benefits to primary key lookups. This benefit can come at the cost of non-primary key lookup queries though – often the extra work for these “search” queries outweighs the gains made for the primary key lookups.

Query performance (particularly for “small” systems) is just part of the story though. The real cost of removing joins is the impact on data modification. Aside from (potentially) vastly increasing the number of records we must update to keep data consistent, denormalization can introduce waiting issues for concurrent updates. In the worst case we may introduce application bugs in the form of deadlocks.
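
As a quick hypothetical illustration (the orders/customers schema here is mine, not from the earlier posts) – if customer_name is copied onto every order row, renaming one customer means updating all their orders:

-- denormalized: customer_name is repeated on every order row.
-- this update locks every matching order until commit, so concurrent
-- sessions changing those orders must wait (or, at worst, deadlock)
update orders
set    customer_name = 'New Name'
where  customer_id = 42;

-- normalized: the same change is a single-row update on customers
update customers
set    customer_name = 'New Name'
where  customer_id = 42;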

We’ve seen there are some cases where joins do result in inefficient queries. These are usually the result of poor design decisions or limitations of the optimizer. Better design and use of Oracle features can overcome the worst of these, however.

I hope you’ve found this series (and blog as a whole!) useful and informative. If there’s anything else you’d like to see on the subject of joins, get in touch or say so in the comments.

I’m going to take a break from regular blogging for a while to work on other projects. Thanks to those of you who’ve been following. If you’d like to receive posts when I start again, just enter your email address in the form below!

Apr 07 2014

So far in the joins series we’ve looked at the effect removing joins (via denormalization) has on performance. We’ve seen that joins can cause primary key lookups to do more work. Lowering the normalization level to remove these can negatively impact “search” style queries though. More importantly, we’ve seen the real cost of denormalizing to remove joins is when updating records, potentially leading to concurrency waits and application bugs.

So are joins always “good”?

The fastest way to do anything is to not do it at all. If joins aren’t necessary to answer your queries, including them will add some overhead. Also, like any tool, there are situations where adding a join may substantially slow your query down.
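
As a trivial, hypothetical example – if a join contributes no columns and can’t change the results, it’s pure overhead:

-- no columns from customers are selected; assuming every order has a
-- valid customer (validated, not-null foreign key) the join can't change
-- the results, yet Oracle still has to process it unless the optimizer
-- can apply join elimination
select o.order_id, o.order_date
from   orders o
join   customers c
on     c.customer_id = o.customer_id;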

Here are some examples where joins may be “expensive”, and strategies for coping with them.
Continue reading »

Apr 04 2014

In the previous article in the joins series we compared query performance between a third normal form schema and the same schema denormalized to second normal form. We then extended the example so our denormalized schema was in just first normal form.

The normalized approach performed better overall. The differences were small though – generally just a few consistent gets, and all the queries executed in under a second. As Jeff Atwood points out, hardware is powerful enough that for most systems the performance difference between more normalized and less normalized schemas won’t matter much for your queries. I disagree slightly with one of his conclusions though – that normalization “doesn’t matter”.

To see why, let’s look at a different use-case: data modification. While this isn’t to do with joins directly, it is a very important consideration when deciding to denormalize to “remove joins”.

I’m not going to get into the details of the updates’ performance – there’s a much bigger problem waiting for us. Can you see what it is?
Continue reading »

Mar 31 2014

Continuing the series on joins, I’m going to look at denormalization. This process reduces the number of joins necessary to return results for a schema.

One of the big arguments against normalizing data is “for performance”. The process of normalization creates new tables as relations are decomposed according to their functional dependencies. This means (more) joins are necessary to return the same results.
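
For example (a made-up schema, not one from this series) – holding customer details on every order repeats them for each order a customer places. Decomposing gives two tables, so listing orders with customer names now needs a join:

create table customers (
  customer_id   number primary key,
  customer_name varchar2(100)
);

create table orders (
  order_id    number primary key,
  customer_id number not null references customers,
  order_date  date
);

select o.order_id, o.order_date, c.customer_name
from   orders o
join   customers c
on     c.customer_id = o.customer_id;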

A google of “database normalization performance” turns up several articles like this, this and this, all advocating denormalizing your data to improve performance. There are no concrete discussions or test cases showing why you should denormalize, just hand-wavy arguments about joins being bad.

I wanted to test this to see if normalizing really makes performance worse. If you’ve been preaching “you must denormalize for performance”, my conclusions may surprise you.
Continue reading »

Mar 28 2014

Joins appear (to me) to be getting a lot of bad press recently. In discussions I’ve had and articles I’ve read, many take the position that joins are somehow inherently bad and to be avoided at all costs.

I’ve never been entirely convinced by the “joins are bad” arguments however, partly because there are few concrete cases actually demonstrating the (additional) cost of joins. Instead, discussions tend to be hand-wavy arguments about extra CPU cycles or something to that effect.

So over the next few posts I’m going to do something a bit different on this blog and discuss joins in more detail. We’ll ask questions like:

  • Does removing joins (denormalizing) really help performance? If so, what are the catches?
  • When are joins bad? What can be done in these cases?

If you’ve got any other questions around joins let me know – I’ll add them to the list and address them in this series.

Mar 24 2014

The following update forms the basis of a DIY connection pool. It is intended to allocate an unused session to whoever requests it:

declare

  sess connection_pool.session_name%type;
  
begin
  
  update connection_pool
  set    in_use = 'Y',
         last_used = systimestamp
  where  rowid in (
      select rowid 
      from   (
          select rowid
          from   connection_pool
          where  in_use = 'N'
          order  by last_used
      )
      where  rownum = 1
  )
  returning session_name 
  into      sess;
  
  dbms_output.put_line ( sess );
  
end;
/

There’s a serious flaw with this approach however – can you spot it?

Answers in the comments!

Mar 21 2014

Back on the films today, here’s another six to get:

select talk
from   monarch;

select * 
from   (select city_name
        from   cities
        where  state = 'IL'
        order  by population desc)
where  rownum = 1;

select * 
from   nations n
where  not exists (
         select null
         from   people p
         where  status = 'retired'
         and    sex = 'male'
         and    p.location = n.location
);

sqlplus -s young_ovis_aries

select key
from   injury;

select *
from   human_property
where  duration between date '2000-01-01'
                and date '2012-12-31';

Finally, there’s something that links all the above films. Can you get it?

Put your answers in the comments!

Mar 17 2014

create table from_12c_onwards (
  you_can_have_your_pk number 
    generated always as identity,
  so_you_dont_have_to_type_it varchar2(10)
);

insert into from_12c_onwards (
  so_you_dont_have_to_type_it
) values (
  'yes'
);

insert into from_12c_onwards (
  so_you_dont_have_to_type_it
) values (
  'Yes'
);

insert into from_12c_onwards (
  so_you_dont_have_to_type_it
) values (
  'YES!'
);

select * from from_12c_onwards;

YOU_CAN_HAVE_YOUR_PK SO_YOU_DON
-------------------- ----------
                   1 yes
                   2 Yes
                   3 YES!

If you’d like more details, there’s a nice article on oracle-base discussing how identity columns work. If you prefer, you can assign a sequence to be a column default instead, as discussed here.
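
For completeness, here’s a minimal sketch of the sequence-as-default approach (the names here are made up for illustration):

create sequence a_pk_seq;

create table seq_default_demo (
  pk  number default a_pk_seq.nextval primary key,
  val varchar2(10)
);

insert into seq_default_demo ( val ) values ( 'also yes' );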