```sql
select a_sequence.nextval, may_not_come_out_in_order
from   rac_databases;

   NEXTVAL MAY_NOT_COME_OUT_IN_ORDER
---------- -------------------------
         1 Y

conn chris@rac_node2

select a_sequence.nextval, may_not_come_out_in_order
from   rac_databases;

   NEXTVAL MAY_NOT_COME_OUT_IN_ORDER
---------- -------------------------
        21 Y

conn chris@rac_node1

select a_sequence.nextval, may_not_come_out_in_order
from   rac_databases;

   NEXTVAL MAY_NOT_COME_OUT_IN_ORDER
---------- -------------------------
         2 Y
```

From time to time, code is written to process items greater than or equal to the last sequence-generated value in a table (usually the primary key). Because sequence values are cached on each instance rather than cluster-wide, be prepared for this to break if you move to a RAC environment!
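If you genuinely need cluster-wide ordering of sequence values on RAC, one option (at the cost of extra inter-instance coordination) is to create the sequence with the ORDER option. A minimal sketch, with an invented sequence name:

```sql
-- ORDER forces RAC instances to coordinate, so values are handed out
-- in request order across the whole cluster. This adds inter-instance
-- messaging overhead, so only use it when you really need ordering.
create sequence an_ordered_sequence
  start with 1
  increment by 1
  cache 20
  order;
```

Even so, relying on "greater than the last value" logic is fragile; ORDER just removes the per-instance caching surprise shown above.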
```sql
select * from fact_table;

DESCRIPTION
----------------------------------------------
The central table in a star schema, storing
measurements for the linked dimensions
```

Hang on, surely all tables contain facts?! Apparently only data warehouses are worthy of storing facts! Maybe that explains why "NoSQL style" eventual consistency is gaining popularity ;)
```sql
alter table i_know_my_data_is_rubbish_so
  enable novalidate constraint so_at_least_new_stuff_is_good;
```

Creating constraints as novalidate can be a great way to start bringing order to chaos if you've inherited a database with no constraints. It can also be useful when business constraints have changed and you want to leave existing data that violates the new rule in place.
If you’d like an example of this in practice, just use this script.
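To make the novalidate behaviour concrete, here's a minimal sketch (table, column and constraint names are invented for illustration): existing violating rows survive, but new violations are rejected.

```sql
create table legacy_orders ( qty number );

insert into legacy_orders values ( -5 );   -- bad historic data

-- novalidate: Oracle skips checking the rows already in the table...
alter table legacy_orders
  add constraint ord_qty_positive check ( qty > 0 )
  enable novalidate;

-- ...so the -5 row is still there, but new violations fail:
insert into legacy_orders values ( -1 );
-- ORA-02290: check constraint (ORD_QTY_POSITIVE) violated
```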
```sql
begin
  select if_this_slow_query_takes, eighty_percent_of_the_runtime
  from   your_cripplingly_slow_process;

  select there_is_no_point_spending,
         eighty_percent_of_tuning_time trying_to_make_gains
  from   this_relatively_quick_query;
end;
/
```

Of course, the first step is to figure out which query is consuming the most time in your process - a point well made by Cary Millsap in his "Optimizing Oracle Performance" book.
```sql
select convert('£', 'US7ASCII', 'WE8MSWIN1252') as "GBP?!",
       convert('€', 'US7ASCII', 'WE8MSWIN1252') as "EUR?!",
       convert('¥', 'US7ASCII', 'WE8MSWIN1252') as "YEN?!",
       convert('$', 'US7ASCII', 'WE8MSWIN1252') as "USD!"
from   where_did_all_my_symbols_go;

GBP?! EUR?! YEN?! USD!
----- ----- ----- ----
A#    a??   AY    $
```

This may seem like a bit of trivia, but the client will do its own conversions too. So if you have people connecting with different client character sets, you can find that a script which worked fine in dev and test produces mangled data when someone else runs it in production. This can cause some "interesting" side effects if you have £, € or ¥ in your code!
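Before chasing mangled symbols, it's worth checking what character set the database itself uses and comparing that with each client's NLS_LANG setting. A quick sketch using the standard data dictionary view:

```sql
-- What character sets is this database actually storing data in?
select parameter, value
from   nls_database_parameters
where  parameter in ( 'NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET' );
```

If a client's NLS_LANG character set doesn't match what the script's file encoding really is, the client-side conversion can corrupt those currency symbols before the data ever reaches the database.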
```sql
create table baby as
  select /*+ parallel (9) */ months
  from   mothers;

ORA-12827: insufficient parallel query slaves
(requested 9, available 1, parallel_min_percent 9)
12827. 00000 - "insufficient parallel query slaves available"
```

One for all the project managers to take note of - there are some tasks which simply can't be done any faster by throwing more people/resources at them ;)
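For anyone wondering where the ORA-12827 comes from: parallel_min_percent sets the minimum percentage of the requested parallel slaves that must be available; below that threshold the statement errors out rather than quietly running with fewer slaves. A small sketch:

```sql
-- Insist on at least half the requested parallel slaves;
-- if fewer are free, raise ORA-12827 instead of degrading silently.
alter session set parallel_min_percent = 50;
```

With the default of 0, the statement would simply run with however many slaves happen to be available (possibly serially).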
```sql
insert into data_you_thought_was_gone
  select how_it_looked_before, the_users_hit_delete
  from   data_you_thought_was_gone
         as of timestamp sysdate - interval '1' hour
  minus
  select how_it_looked_before as how_it_looks_now, the_users_hit_delete
  from   data_you_thought_was_gone;
```

Flashback to the rescue again! This approach only works if you're notified of the deletion before your undo retention runs out; otherwise you've got to go to your backups. You may also need extra filtering if there have been updates to your data as well as deletions. Still, it can come in very handy in certain scenarios :)
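If you want to rewind the whole table rather than re-insert the missing rows (and you're happy to lose any changes made since), a hedged alternative sketch is FLASHBACK TABLE - note it needs row movement enabled first, and the same undo retention caveat applies:

```sql
-- Flashback table rewrites rowids, so row movement must be allowed.
alter table data_you_thought_was_gone enable row movement;

-- Rewind the whole table to how it looked an hour ago.
flashback table data_you_thought_was_gone
  to timestamp sysdate - interval '1' hour;
```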
```sql
alter table crumbly_cheese move tablespace caerphilly;
```

You can't beat a classic cheese pun in SQL form ;)
```sql
select dm.name
from   data_models dm
join   data_model_values qc  on dm.id = qc.model_id
join   data_model_values de  on dm.id = de.model_id
join   data_model_values coa on dm.id = coa.model_id
where  qc.attr  = 'Query complexity'
and    qc.val   = 'Hideously mind-boggling'
and    de.attr  = 'Datatype enforcement'
and    de.val   = 'Non-existent'
and    coa.attr = 'Calculation of aggregates'
and    coa.val  = 'Almost impossible';

NAME
----------------------
ENTITY ATTRIBUTE VALUE

select name
from   data_models
where  query_complexity = 'Simple and straightforward'
and    datatype_enforcement = 'Strong and complete'
and    calculation_of_aggregates = 'As easy as group by';

NAME
----------
RELATIONAL
```

While it's possible to end up with some dodgy queries and datatypes in a "normal" relational model, the problems are much easier to fix than they are with EAV. If you've got any other comparisons between EAV and normal relational models, please add them in the comments!
To be fair, there are (very) few cases where EAV is appropriate. This provides a nice discussion of when you should/shouldn’t use this.
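If you're stuck with an EAV model, one common coping strategy is to pivot the attributes back into columns (using the invented tables from the example above):

```sql
-- Pivot EAV rows into one column per attribute, one row per model.
select dm.name,
       max( case when v.attr = 'Query complexity'
                 then v.val end ) as query_complexity,
       max( case when v.attr = 'Datatype enforcement'
                 then v.val end ) as datatype_enforcement,
       max( case when v.attr = 'Calculation of aggregates'
                 then v.val end ) as calculation_of_aggregates
from   data_models dm
join   data_model_values v on dm.id = v.model_id
group  by dm.name;
```

This makes querying bearable, but everything is still stored in one generic value column, so the datatype enforcement problem remains.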
```sql
insert into joyous_memories
  select making_vast_improvements from that_horrible_query
  union all
  select designing_data_models from scalable_applications
  union all
  select heart_warming_smile from your_newborn_child;
```

While a lot of effort can go in before you get these outcomes, the reward is worth it! :)