Posts by saxon:
Same concept, different theme. All the SQL statements below are clues to the English spelling of countries.
Can you get them all?
As always, put your answers in the comments!
select * from german_affirmative join cooking_vessel;
select * from nuts where anaphylaxis = 'sexually transmitted';
select * from olde_english_the join males;
select * from elements where atomic_number = 47;
select * from programming_languages where able_to = 'influence PL/SQL';
select * from places where H2O = 'frozen';
Today celebrates one year of the database design quiz on the PL/SQL Challenge!
Here’s a look back at some of the stats for the quiz to date with some trivia related to the numbers :)
select count(*) from quizzes;

COUNT(*)
--------
      52

-- The number of white keys on a piano
-- the smallest number with 4 representations
-- as the sum of 4 positive squares

select count(*) from players;

COUNT(*)
--------
     691

-- The mass (in kilos) of a typical dairy cow
-- ~7 times the average adult body power
-- consumption (in watts)

select count(*) from answers_submitted;

COUNT(*)
--------
   13723

-- Approximately the length of the Great Wall
-- of China (in miles) and the mass of
-- a large dinosaur (in stones)

select name, score
from   (select name, score
        from   players
        order  by score desc)
where  rownum <= 5;

NAME             SCORE
---------------- -----
mentzel.iudith   20618
Justin Cave      20038
Joaquin Gonzalez 19284
Krzysztof Helbin 19242
Elic             19127

-- Iudith and Justin are doing a great job,
-- the only players to break 20,000 to date!

select avg(rating) from player_surveys;

AVG(RATING)
-----------
       4.13

-- 50 times the half life of uranium 241
-- (in hours)
-- Great to see the average rating above 4 :)
-- I have to thank my reviewers, particularly
-- Yuan Tschang, for helping keep the quality
-- high!
Thanks to everyone who’s taken part, pointed out my mistakes and added to the discussions for the quiz over the past year.
If you’ve not played yet, today’s a great time to get started. Just head over to www.plsqlchallenge.com, register for a free account and get playing!
I insert a row into a table, roll it back, yet somehow I’ve created a new object in my database!
select count(*) from user_objects;

  COUNT(*)
----------
         2

insert into a_table values ('that create', 1, 'new object');

rollback;

select count(*) from user_objects;

  COUNT(*)
----------
         3
There are no triggers or other “trickery” involved – the code above is a copy-paste from a database where I have the only session.
How did this happen? Put your answers in the comments!
If you’d like to get more posts on using and designing Oracle databases, enter your details in the form below to receive future updates.
There’s a neat optimization in Oracle I found while tinkering around (look closely at the predicate section):
select oracle_can_now_use, a_function_based_index
from   queries_with_no_functions
where  a_date_col_with_an_fbi = :a_date
and    oracle_version >= '188.8.131.52';

-------------------------------------------------------------------------
| Id  | Operation                   | Name                       | Rows |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    1 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| QUERIES_WITH_NO_FUNCTIONS  |    1 |
|*  2 |   INDEX RANGE SCAN          | QUWF_DATE_FBI              |    1 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ORACLE_VERSION">='184.108.40.206' AND
              "A_DATE_COL_WITH_AN_FBI"=:A_DATE)
   2 - access(TRUNC(INTERNAL_FUNCTION("A_DATE_COL_WITH_AN_FBI"))=TRUNC(:A_DATE))
Line 2 of the predicates shows that the INDEX RANGE SCAN was accessed via TRUNC(INTERNAL_FUNCTION()) – a function-based index was used even though there’s no function in the where clause!
This optimization appeared in 220.127.116.11 (fix 9263333) – I have to thank the people of Oracle-L (particularly Velikikh Mikhail) for identifying when this improvement came in.
This optimization isn’t restricted to TRUNC() on date columns however.
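The new Oracle behaviour itself can’t be reproduced outside Oracle, but if you’d like to tinker with the classic function-based-index setup it improves on, here’s a minimal sketch using Python’s sqlite3 module (expression indexes stand in for FBIs, substr() stands in for TRUNC(), and all table/index names are made up). Under the old rule, the predicate has to repeat the indexed expression exactly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO events (created_at) VALUES (?)",
    [("2024-03-01 10:15:00",), ("2024-03-01 18:30:00",), ("2024-03-02 09:00:00",)],
)

# Expression index: the analogue of an Oracle function-based index on
# TRUNC(a_date_col) - substr() here truncates the timestamp to its day
conn.execute("CREATE INDEX events_day_idx ON events (substr(created_at, 1, 10))")

# The classic rule: the where clause repeats the indexed expression,
# so the index is usable
day_count = conn.execute(
    "SELECT count(*) FROM events WHERE substr(created_at, 1, 10) = '2024-03-01'"
).fetchone()[0]
print(day_count)  # 2

# EXPLAIN QUERY PLAN lets you check whether the expression index is used
for row in conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT count(*) FROM events WHERE substr(created_at, 1, 10) = '2024-03-01'"
):
    print(row)
```

The point the post makes is that Oracle (from the fix above) can now match the indexed expression even when your query doesn’t spell it out – other engines still require the predicate to repeat it.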
Over the past few articles we’ve looked at database joins. It started out with me noticing that joins appear to be getting bad press recently and wondering whether they really are the root of all evil that some people seem to think they are.
We’ve seen that denormalizing removes joins, giving performance benefits to primary key lookups. This benefit can come at the cost of non-primary key lookups though – often the extra work for these “search” queries outweighs the gains made for the primary key lookups.
Query performance (particularly for “small” systems) is just part of the story though. The real cost of removing joins is the impact on data modification. Aside from (potentially) vastly increasing the number of records we must update to keep data consistent, denormalization can introduce waiting issues for concurrent updates. In the worst case we may introduce application bugs in the form of deadlocks.
We’ve seen there are some cases where joins do result in inefficient queries. These are usually the result of poor design decisions or limitations of the optimizer. Better design and use of Oracle features can overcome the worst of these, however.
I hope you’ve found this series (and the blog as a whole!) useful and informative. If there’s anything else you’d like to see on the subject of joins, get in touch or say so in the comments.
I’m going to take a break from regular blogging for a while to work on other projects. Thanks to those of you who’ve been following. If you’d like to receive posts when I start again, just enter your email address in the form below!
So far in the joins series we’ve looked at the effect removing joins (via denormalization) has on performance. We’ve seen that joins can cause primary key lookups to do more work. Lowering the normalization level to remove these can negatively impact “search” style queries though. More importantly, we’ve seen the real cost of denormalizing to remove joins comes when updating records, potentially leading to concurrency waits and application bugs.
So are joins always “good”?
The fastest way to do anything is to not do it at all. If joins aren’t necessary to answer your queries, including them will add some overhead. Also, like any tool, there are situations where adding a join may substantially slow your query down.
Here are some examples where joins may be “expensive”, and strategies for coping with them.
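As one hedged illustration of such a strategy (sketched in Python with sqlite3, hypothetical customers/orders tables): a join used purely as an existence check multiplies rows and forces a DISTINCT to clean them up, whereas a semi-join (EXISTS) never duplicates and can stop at the first match.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers VALUES (1, 'a'), (2, 'b');
INSERT INTO orders VALUES (10, 1), (11, 1), (12, 1);
""")

# Joining to ask "has at least one order?" produces one row per order,
# so a DISTINCT is needed to collapse the duplicates...
dup = conn.execute("""
    SELECT DISTINCT c.name
    FROM   customers c
    JOIN   orders o ON o.customer_id = c.customer_id
""").fetchall()

# ...while EXISTS expresses the same question as a semi-join:
# no duplicates, and the probe can stop at the first matching order
semi = conn.execute("""
    SELECT c.name
    FROM   customers c
    WHERE  EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id)
""").fetchall()

print(dup, semi)  # [('a',)] [('a',)]
```

Same answer either way – but only one of the two forms asks the database for exactly the work the question requires.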
In the previous article in the joins series we compared query performance between a third normal form schema and the same schema denormalized to second normal form. We then extended the example so our denormalized schema was in just first normal form.
The normalized approach performed better overall. The differences were small though – generally just a few consistent gets, and all the queries executed in under a second. As Jeff Atwood points out, hardware is powerful enough that for most systems the performance difference between more normalized and less normalized schemas won’t have much impact on your queries. I disagree slightly with one of his conclusions though – that normalization “doesn’t matter”.
To see why, let’s look at a different use-case: data modification. While this isn’t to do with joins directly, it is a very important consideration when deciding to denormalize to “remove joins”.
I’m not going to get into the details of the update performance – there’s a much bigger problem waiting for us. Can you see what it is?
Continuing the series on joins, I’m going to look at denormalization. This process reduces the number of joins necessary to return results for a schema.
One of the big arguments against normalizing data is “for performance”. The process of normalization creates new tables as relations are decomposed according to their functional dependencies. This means (more) joins are necessary to return the same results.
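As a sketch of that decomposition (in Python with sqlite3; the table names and data are made up): splitting out the functional dependency brand → brand_country means a question the flat table answered directly now needs a join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Denormalized: brand details repeated on every product row
CREATE TABLE products_flat (product TEXT, brand TEXT, brand_country TEXT);
INSERT INTO products_flat VALUES
  ('widget', 'Acme', 'US'), ('gadget', 'Acme', 'US'), ('sprocket', 'Blix', 'DE');

-- Decomposed on the functional dependency brand -> brand_country
CREATE TABLE brands (brand TEXT PRIMARY KEY, brand_country TEXT);
CREATE TABLE products (product TEXT, brand TEXT REFERENCES brands);
INSERT INTO brands   SELECT DISTINCT brand, brand_country FROM products_flat;
INSERT INTO products SELECT product, brand FROM products_flat;
""")

# The flat table answers "which country is each product's brand from?" directly...
flat = conn.execute(
    "SELECT product, brand_country FROM products_flat ORDER BY product"
).fetchall()

# ...the normalized schema needs a join to return the same results
joined = conn.execute("""
    SELECT p.product, b.brand_country
    FROM   products p
    JOIN   brands b ON b.brand = p.brand
    ORDER  BY p.product
""").fetchall()

print(flat == joined)  # True
```

Identical results – the join is the price of storing each brand’s country exactly once.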
A Google search for “database normalization performance” turns up several articles like this, this and this, all advocating denormalizing your data to improve performance. There’s no concrete discussion or test cases showing why you should denormalize, just hand-wavy arguments about joins being bad.
I wanted to test this to see if normalizing really makes performance worse. If you’ve been preaching “you must denormalize for performance”, my conclusions may surprise you.
Joins appear (to me) to be getting a lot of bad press recently. In discussions I’ve had and articles I’ve read many give the position that joins are somehow inherently bad and to be avoided at all costs.
I’ve never been entirely convinced by the “joins are bad” arguments however – partly because there are few concrete cases actually demonstrating the (additional) cost of joins. Instead discussions tend to be hand-wavy arguments about extra CPU cycles used or something to that effect.
So over the next few posts I’m going to do something a bit different on this blog and discuss joins in more detail. We’ll ask questions like:
- Does removing joins (denormalizing) really help performance? If so, what are the catches?
- When are joins bad? What can be done in these cases?
If you’ve got any other questions around joins let me know – I’ll add them to the list and address them in this series.
The following update forms the basis of a DIY connection pool. It is intended to allocate an unused session to whoever requests it:
declare
  sess connection_pool.session_name%type;
begin
  update connection_pool
  set    in_use = 'Y',
         last_used = systimestamp
  where  rowid in (
           select rowid
           from   (
             select rowid
             from   connection_pool
             where  in_use = 'N'
             order  by last_used
           )
           where  rownum = 1
         )
  returning session_name into sess;

  dbms_output.put_line ( sess );
end;
/
There’s a serious flaw with this approach however – can you spot it?
Answers in the comments!