Tuesday, September 20, 2011

All About Temporary Tablespace Part I

There is a lot of confusion about temporary tablespaces. Here I have tried to cover the basics of the temporary tablespace and one of the most common related errors, ORA-1652: "unable to extend temp segment". This problem can be solved as follows :
1.) Increase the size of the temporary tablespace, either by resizing the tempfile or by adding another tempfile.
2.) Identify the SQL statements that are consuming a large amount of temp space and kill the corresponding sessions (not a proper solution).
3.) Check the SQL and tune it.
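As a sketch of options 1 and 2, the temp tablespace can be grown and the heavy sessions identified like this (the file paths, sizes, and the tablespace name TEMP are assumptions; adjust them to your environment):

```sql
-- Option 1: resize an existing tempfile (path and size are hypothetical)
ALTER DATABASE TEMPFILE '/u01/oradata/ORCL/temp01.dbf' RESIZE 2G;

-- Or add another tempfile to the TEMP tablespace
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/ORCL/temp02.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 4G;

-- Option 2: find the sessions consuming the most temp space
SELECT s.sid, s.serial#, s.username, u.tablespace, u.blocks
  FROM v$session s, v$tempseg_usage u
 WHERE s.saddr = u.session_addr
 ORDER BY u.blocks DESC;

-- Kill an offending session (sid and serial# come from the query above)
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
```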

Here I have explained what we can do when our database runs out of temp space.

Introduction : 
Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. For example, if we join two large tables and Oracle cannot do the sort in memory (see the sort_area_size initialisation parameter), space will be allocated in a temporary tablespace for the sort operation. Other SQL operations that might require disk sorts include: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc.
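To see whether sorts are actually spilling to disk (and therefore using the temporary tablespace), we can compare the standard sort statistics in v$sysstat:

```sql
-- 'sorts (memory)' completed in the PGA; 'sorts (disk)' spilled to temp
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('sorts (memory)', 'sorts (disk)');
```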

A temporary tablespace contains transient data that persists only for the duration of the session. Temporary tablespaces can improve the concurrency of multiple sort operations, reduce their overhead, and avoid Oracle Database space management operations. Oracle can also allocate temporary segments for temporary tables and for indexes created on temporary tables. Temporary tables hold data that exists only for the duration of a transaction or session. Oracle drops segments for a transaction-specific temporary table at the end of the transaction and drops segments for a session-specific temporary table at the end of the session. If other transactions or sessions share the use of that temporary table, the segments containing their data remain in the table. Here, we will cover the following points in Part II : 

1.) How Oracle manages sorting operations
2.) How a DBA determines and handles the situation when the temporary tablespace is running out of space
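The transaction-specific versus session-specific behaviour described above maps to the two flavours of global temporary table; a minimal sketch (the table names are made up for illustration):

```sql
-- Transaction-specific: rows and segments vanish at COMMIT
CREATE GLOBAL TEMPORARY TABLE gtt_txn (id NUMBER)
  ON COMMIT DELETE ROWS;

-- Session-specific: rows and segments survive until the session ends
CREATE GLOBAL TEMPORARY TABLE gtt_sess (id NUMBER)
  ON COMMIT PRESERVE ROWS;
```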

Click Here for Part II :

For more detail about temporary tablespace  Click Here



Monday, September 19, 2011

Drop All Objects of a Schema


Sometimes we need to drop all the objects of a schema before importing. The basic approach is to drop the schema and recreate it, which is quite efficient, but it requires the SYSTEM or SYSDBA privilege to drop and create the user. If we do not have those privileges, then dropping all the objects individually is the option. Below is a demo where we drop all objects of the "SCOTT" schema.

Step 1 : Generate the script for dropping the objects :

SQL> spool C:\genera_dropall.sql
SQL> select 'drop '||object_type||' '|| object_name||  DECODE(OBJECT_TYPE,'TABLE',' CASCADE CONSTRAINTS;',';')  from user_objects;

'DROP'||OBJECT_TYPE||''||OBJECT_NAME||DECODE(OBJECT_TYPE,'TABLE','CASCADECONSTRAINTS;',';')
-------------------------------------------------------------------------------------------
drop TABLE SALGRADE CASCADE CONSTRAINTS;
drop TABLE BONUS CASCADE CONSTRAINTS;
drop INDEX PK_EMP;
drop TABLE EMP CASCADE CONSTRAINTS;
drop TABLE DEPT CASCADE CONSTRAINTS;
drop INDEX PK_DEPT;
6 rows selected.
SQL> spool off

Now we have the drop script 'genera_dropall.sql' (edit it first to remove the SQL prompt, the column heading, and the "6 rows selected." line).
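A slightly more robust version of the generation script switches off the SQL*Plus decoration (so no manual editing is needed) and skips objects already sitting in the recycle bin; this is a sketch, and the spool path is an assumption:

```sql
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 ECHO OFF
SPOOL C:\genera_dropall.sql
SELECT 'drop '||object_type||' "'||object_name||'"'||
       DECODE(object_type, 'TABLE', ' cascade constraints;', ';')
  FROM user_objects
 WHERE object_name NOT LIKE 'BIN$%';
SPOOL OFF
```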

Step 2 : Now we drop all the objects, i.e., run the 'genera_dropall.sql' script to drop everything in the schema.

SQL> @genera_dropall.sql
Now check the objects in the SCOTT schema :

SQL> select * from tab;
TNAME                              TABTYPE    CLUSTERID
---------------------------------  ---------  ---------
BIN$5JksbkFeSai/0JTwbJOenQ==$0     TABLE
BIN$KtthiEIaRZmkv/5+FoYu5A==$0     TABLE
BIN$L/qcqzTxTsm8XHkDrfANOg==$0     TABLE
BIN$opUCTunxRf+0AUbhOzzBgw==$0     TABLE

These BIN$ entries are dropped tables sitting in the Oracle recycle bin, hence the "purge recyclebin" below (and the reason a generation script should exclude objects already in the recycle bin from the drop loop).

Step  3  : Purge the recyclebin objects.

SQL>purge recyclebin ;
Recyclebin purged.

The script produces a list of drop statements. Not all of them will execute: since the tables are dropped with CASCADE CONSTRAINTS, the DROP INDEX statements for the PK_* indexes will fail because those indexes were already dropped along with their tables. But in the end we will have a pretty clean schema. Confirm with:

SQL> select * from user_objects;
no rows selected


Enjoy    :-) 


Wednesday, September 14, 2011

ORA-01555: Snapshot Too Old

UNDO is the backbone of the READ CONSISTENCY mechanism provided by Oracle. The multi-user data concurrency and read consistency mechanisms make Oracle stand tall in the Relational Database Management System (RDBMS) world.

Best of all, automatic undo management allows the DBA to specify how long undo information should be retained after commit, preventing "snapshot too old" errors on long-running queries. This is done by setting the UNDO_RETENTION parameter. The default is 900 seconds (15 minutes), and we can set this parameter to guarantee that Oracle keeps undo for extended periods of time. A flashback query can go back as far as the point in time allowed by the UNDO_RETENTION parameter.

Why does the ORA-01555 error occur?

1.) Oracle provides read consistency by reading the "before image" of changed rows from the online undo segments. If there are lots of updates and long-running SQL, the rollback records needed by a reader for a consistent read can be overwritten by other writers.

2.) It may also be due to a small undo tablespace and a short UNDO_RETENTION period.

To solve this issue we need to increase the undo tablespace and the undo retention period. The question then is what the optimal values for undo retention and undo tablespace size are. For this we can use the advisor; with OEM it is quite easy to estimate the required size and retention of undo.

Calculate Optimal Undo_Retention  :
The following query will help us to optimize the UNDO_RETENTION parameter :


Optimal Undo Retention = Actual Undo Size / (DB_BLOCK_SIZE × UNDO_BLOCK_PER_SEC)

To calculate Actual Undo Size :

SQL> SELECT SUM(a.bytes)/1024/1024 "UNDO_SIZE_MB"
            FROM v$datafile a, v$tablespace b,dba_tablespaces c
           WHERE c.contents = 'UNDO'
           AND c.status = 'ONLINE'
           AND b.name = c.tablespace_name
           AND a.ts# = b.ts#;

Undo Blocks per Second : 

SQL> SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) "UNDO_BLOCK_PER_SEC"
FROM v$undostat ;

DB Block Size  :

SQL> SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"   FROM   v$parameter
  WHERE name = 'db_block_size';

We can do all in one query as

SQL> SELECT d.undo_size/(1024*1024) "ACT_UNDO_SIZE [MB]",
            SUBSTR(e.value,1,25) "UNDO_RTN [Sec]",
            ROUND((d.undo_size / (TO_NUMBER(f.value) *
            g.undo_block_per_sec))) "OPT_UNDO_RET [Sec]"
       FROM (
            SELECT SUM(a.bytes) undo_size
              FROM v$datafile a,
                   v$tablespace b,
                   dba_tablespaces c
             WHERE c.contents = 'UNDO'
               AND c.status = 'ONLINE'
               AND b.name = c.tablespace_name
               AND a.ts# = b.ts#
            ) d,
            v$parameter e,
            v$parameter f,
            (
            SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
                   undo_block_per_sec
              FROM v$undostat
            ) g
      WHERE e.name = 'undo_retention'
        AND f.name = 'db_block_size'
/
ACT_UNDO_SIZE [MB]  UNDO_RTN [Sec]  OPT_UNDO_RET [Sec]
------------------  --------------  ------------------
                50             900               24000
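Once the query above suggests an optimal value, applying it is a one-liner (24000 is just the figure from the sample output here; substitute your own result):

```sql
-- Raise undo_retention to the computed optimal value (in seconds)
ALTER SYSTEM SET undo_retention = 24000;
```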

Calculate Needed UNDO Size :
If we are not limited by disk space, then it is better to choose the UNDO_RETENTION time that is best for us (for FLASHBACK, etc.) and allocate the appropriate size to the UNDO tablespace according to the database activity :

Formula : Undo Size = UNDO_RETENTION × DB_BLOCK_SIZE × UNDO_BLOCK_PER_SEC
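If the needed size from this formula exceeds the current undo tablespace, it can be grown like any other tablespace; a sketch, where the datafile paths, sizes, and the tablespace name UNDOTBS1 are assumptions:

```sql
-- Resize the existing undo datafile (path and size are hypothetical)
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 2G;

-- Or add another datafile to the undo tablespace
ALTER TABLESPACE undotbs1 ADD DATAFILE '/u01/oradata/ORCL/undotbs02.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 4G;
```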



Here again we can find it in a single query :

SQL> SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
       g.undo_block_per_sec) / (1024*1024)
      "NEEDED UNDO SIZE [MByte]"
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a,
              v$tablespace b,
              dba_tablespaces c
        WHERE c.contents = 'UNDO'
          AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name
          AND a.ts# = b.ts#
       ) d,
      v$parameter e,
       v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
         undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention'
  AND f.name = 'db_block_size'
/

We can avoid the ORA-01555 error as follows :

1.) Do not run discrete transactions while sensitive queries or transactions are running, unless we are confident that the data sets required are mutually exclusive.

2.) Schedule long running queries and transactions out of hours, so that the consistent gets will not need to rollback changes made since the snapshot SCN. This also reduces the work done by the server, and thus improves performance.

3.) Code long running processes as a series of restartable steps.

4.) Shrink all rollback segments back to their optimal size manually before running a sensitive query or transaction to reduce risk of consistent get rollback failure due to extent deallocation.

5.) Use a large optimal value on all rollback segments, to delay extent reuse.

6.) Don't fetch across commits. That is, don't fetch on a cursor that was opened prior to the last commit, particularly if the data queried by the cursor is being changed in the current session.

7.) Commit less often in tasks that will run at the same time as the sensitive query, particularly in PL/SQL procedures, to reduce transaction slot reuse.
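Point 6 above, fetching across commits, is worth a concrete sketch. The table and column names here are made up for illustration; the point is the COMMIT inside the fetch loop, which allows the undo needed by the still-open cursor to be reused and so invites ORA-01555 on a long run:

```sql
-- Anti-pattern: committing inside a loop that is still fetching
-- from a cursor opened before those commits (big_table is hypothetical)
BEGIN
  FOR r IN (SELECT id FROM big_table ORDER BY id) LOOP
    UPDATE big_table SET processed = 'Y' WHERE id = r.id;
    COMMIT;  -- avoid this: commit once after the loop instead
  END LOOP;
END;
/
```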


Enjoy       :-)