Thursday, April 7, 2011

Choosing the Proper Database Block Size


Choosing an appropriate block size for a database is often one of the most important decisions affecting the performance of the system. Choosing the proper block size requires a very good understanding of the application environment.
For example, in one particular scenario, when the 16k instance ran an 850,000-row update (no where clause), it finished in 45 minutes. When the 4k instance ran the same 850,000-row update (no where clause), it finished in 2.2 minutes. The change in block size alone made the job run TWENTY TIMES FASTER. So choosing a proper data block size is a very necessary task.


The proper choice of database block size depends to a large extent on the type of data involved and the application. The following guidelines can help in this process:

1.) Operating system block size :  Good performance can be achieved by ensuring that the Oracle block size is equal to, or a multiple of, the operating system block size. If this is not the case, the OS may end up performing extra reads and writes while processing Oracle blocks, resulting in wasted CPU cycles. (A quick way to check the current Oracle block size is shown after the table below.)

2.) Size of buffer cache used :  A larger database block size means more memory is consumed for the same number of buffers in the buffer cache. It also means that more rows are cached per block. If our rows are small and we use a large block size, a single block fetch brings in a lot of rows (and we may not be interested in all of them). The end result is that the OS does more work to fetch things we don't need. On the other hand, if the row length is large, a large block size may prevent chaining.

3.) Balancing of index branches :  Large Oracle block sizes can result in better index balancing, as there are more rows in each branch. A bigger block size means more space for key storage in the branch nodes of B-tree indexes, which reduces index height and improves the performance of indexed queries.

4.) Datafile size limits :  The block size determines the maximum size of a datafile, and hence ultimately limits the size of the database: a (smallfile) datafile can contain at most about 4 million (2^22) blocks. Below are the maximum datafile sizes for each database block size.

db_block_size          Maximum Datafile Size
2k                     8G
4k                     16G
8k                     32G
16k                    64G
32k                    128G
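
As a quick sanity check, the block size of the database (and of each tablespace) can be verified from SQL*Plus. The sketch below also shows how a tablespace with a non-standard block size can be created; note that the matching buffer cache must be configured first, and that the tablespace name, datafile path and sizes used here are purely illustrative:

SQL> show parameter db_block_size
SQL> select tablespace_name , block_size from dba_tablespaces ;

-- a cache for the non-default block size must exist first
-- (16k and 64M are just example values)
SQL> alter system set db_16k_cache_size = 64M ;

-- then a tablespace using that block size can be created
SQL> create tablespace ts_16k
     datafile '/u01/oradata/orcl/ts_16k_01.dbf' size 100M
     blocksize 16k ;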


Beyond this, I found a very good post by Charles Hooper about choosing the optimal data block size.



Enjoy     :-) 



Estimating & Projecting the size of a table in Oracle

There are various methods of estimating and projecting the size of a table. The best way to estimate database size is to load some sample data into our tables and then gather statistics on those tables. Query DBA_TABLES for the AVG_ROW_LEN to get an idea of the average number of bytes per row. We can use Tom Kyte's (asktom.com) show_space code to help us with the evaluation of the number of blocks, or just use the plain simple technique shown below. Many people just use the average row length (AVG_ROW_LEN column) to ascertain the size after doing a CTAS (Create Table As Select)... however, that is not accurate, as we will show below:


Example : 
SQL> create table test as select * from all_objects ;
Table created.
SQL> exec dbms_stats.gather_table_stats(user , 'TEST') ;
SQL> select num_rows , blocks , empty_blocks , avg_row_len from user_tables where table_name = 'TEST' ;

  NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
---------- ---------- ------------ -----------
    183546       2694            0          98
So, the average row length is being reported as 98 bytes.
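
From these statistics, the naive estimate would simply be NUM_ROWS * AVG_ROW_LEN. Here is a small illustrative query (we will see below why this number is misleading):

SQL> select round(num_rows * avg_row_len / 1024) estimated_kb
     from user_tables where table_name = 'TEST' ;

ESTIMATED_KB
------------
       17566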
The "right" and only sure-shot way of calculating the size is to follow these steps:


1.) We check the existing schemas, which gives us tables filled with representative data (say the expected volume for a large table in one of the schemas is 1 million rows; we check the number of blocks used for some known number of rows).
2.) Collect DBMS_STATS.
3.) Check the number of blocks.
4.) Then multiply by the multiplying factor, i.e. if we estimated for say 1% of the actual requirement, multiply by 100.
Example (Same table from above):
SQL> select extent_id , bytes , blocks from user_extents
     where segment_name = 'TEST' and segment_type = 'TABLE' ;
 EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
       330      65536          8
       331      65536          8
       332      65536          8
       333      65536          8
       334      65536          8
       ...        ...        ...
       667      65536          8
                         ----------
Total (sum of blocks)       2696
SQL> select blocks , empty_blocks , avg_space , num_freelist_blocks from user_tables
     where table_name = 'TEST' ;

    BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
      2694            0          0                   0
So :
1.)  We have 2696 blocks allocated to the table TEST.
2.)  0 blocks are empty (of course – in this example that is bound to happen – not in reality though).
3.) 2694 blocks contain data (the other 2 are used by the system).
4.) An average of 0k is free on each used block.
So,
1.) Our table TEST consumes 2694 blocks of storage in total for 183546 records.
2.) Out of this : 2694 * 8k blocksize - (2694 * 0k free) = 21552k is used for our data.
The calculation from the average row length would have yielded : 183546 * 98 bytes = ~17566k (see the difference?).
Also, now that we have the calculation, if the actual table TEST needs to be sized for say 10 million records, then we use the multiplying factor for it:

183546 records take --> 21552k
10 million will take --> (21552k * 10 million) / 183546 = ~1174202k (a little over 1.1 GB)
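
If we prefer to let the database do the arithmetic, the projection can be computed with a one-line query (the literal values below are simply the ones measured above):

SQL> select round(21552 * 10000000 / 183546) projected_kb from dual ;

PROJECTED_KB
------------
     1174202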

That way, we will be assured that the data calculations are correct. Likewise for the indexes. Oracle guru Tom Kyte has a lot of very good examples on his site that we should read before we embark on our sizing calculations for an Oracle schema.


Enjoy    :-)

Wednesday, April 6, 2011

How to find startup & shutdown time of Oracle Database


Sometimes we have to determine the startup and shutdown history of a database. There is no data dictionary view that contains the history of startup and shutdown times. When, say, a system administrator reboots the server, we can determine the startup and shutdown times by checking the alert logfile. But the alert logfile keeps growing, and we usually manage it by truncating or deleting its contents.

Instead of depending on the alert logfile, we can create a table that keeps a history of startups and shutdowns by using triggers. Here we will create two triggers: the first fires once the database starts up, and the second fires when the database shuts down. Let's have a look.

1.) Create a table to store history
SQL> create table db_history ( time date , event  varchar2(12)) ;
Table created.

2.) Create trigger for catching startup time 
SQL>create or replace trigger dbhist_start_trigr
after startup on database 
begin
insert into db_history values (sysdate , 'StartUp' ) ;
end ; 
/
Trigger created.

3.) Create Trigger to catch shutdown time 
SQL> create or replace trigger dbhist_shut_trigr
before shutdown on database
begin
insert into db_history values (sysdate, 'ShutDown' ) ;
end;
/
Trigger created.
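
Now every startup and shutdown gets recorded automatically, and the history can be queried at any time, for example:

SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS' ;
SQL> select time , event from db_history order by time ;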


Enjoy      :-)