Tuesday, September 20, 2011

All About Temporary Tablespace Part III

How does a DBA determine and handle the situation when the temporary tablespace is running out of space? Here we have two techniques to find how space in the temporary tablespace is being used:

1.) Direct Oracle to log every statement that fails for lack of temporary space.
2.) A set of queries to run at any time that capture, in real time, how temporary space is currently being used on a per-session or per-statement basis (an example appears at the end of this post).

Identifying SQL Statements that Fail Due to Lack of Temporary Space :
It is helpful that Oracle logs ORA-1652 errors to the instance alert log, as this informs the DBA that there is a space issue. The error message includes the name of the tablespace in which the lack of space occurred, and a DBA can use this information to determine whether the problem is related to sort segments in a temporary tablespace or whether there is a different kind of space allocation problem.

Unfortunately, Oracle does not identify the text of the SQL statement that failed. However, Oracle does have a diagnostic event mechanism that can give us more information whenever an ORA-1652 error occurs by causing Oracle server processes to write to a trace file. This trace file will contain a wealth of information, including the exact text of the SQL statement that was being processed at the time the ORA-1652 error occurred.

We can set a diagnostic event for the ORA-1652 error in our individual database session with the following statement:

SQL> alter session set events  '1652 trace name errorstack';

We can also set diagnostic events in another session (without affecting all sessions instance-wide) by using the "oradebug event" command in SQL*Plus. We can deactivate the ORA-1652 diagnostic event, or remove all diagnostic event settings from the server parameter file, with statements such as the following:

SQL> alter session set events '1652 trace name context off';
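And here is a hedged sketch of the "oradebug" approach mentioned above for targeting another session (SYSDBA access required; the SID of 130 and the PID of 24 are placeholders):

SQL> connect / as sysdba

SQL> SELECT p.pid
       FROM v$process p, v$session s
      WHERE p.addr = s.paddr
        AND s.sid = 130;

SQL> oradebug setorapid 24
SQL> oradebug event 1652 trace name errorstack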

If a SQL statement fails due to lack of space in the temporary tablespace and the ORA-1652 diagnostic event has been activated, then the Oracle server process that encountered the error will write a trace file to the directory specified by the user_dump_dest instance parameter. 

The top portion of a sample trace file is as follows:

*** ACTION NAME:() 2011-09-17 17:21:14.871
*** MODULE NAME:(SQL*Plus) 2011-09-17 17:21:14.871
*** SERVICE NAME:(SYS$USERS) 2011-09-17 17:21:14.871
*** SESSION ID:(130.13512) 2011-09-17 17:21:14.871
*** 2011-09-17 17:21:14.871
ksedmp: internal or fatal error
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
Current SQL statement for this session:
SELECT "A1"."INVOICE_ID", "A1"."INVOICE_NUMBER", "A1"."INVOICE_DAT
E", "A1"."CUSTOMER_ID", "A1"."CUSTOMER_NAME", "A1"."INVOICE_AMOUNT",
"A1"."PAYMENT_TERMS", "A1"."OPEN_STATUS", "A1"."GL_DATE", "A1"."ITE
M_COUNT", "A1"."PAYMENTS_TOTAL"
FROM "INVOICE_SUMMARY_VIEW" "A1"
ORDER BY "A1"."CUSTOMER_NAME", "A1"."INVOICE_NUMBER"

From the trace file we can clearly see the full text of the SQL statement that failed. It is important to note that the statements captured in trace files with this method may not themselves be the cause of space issues in the temporary tablespace. For example, one query could run successfully and consume 99.9% of the temporary tablespace due to a Cartesian product, while a second query fails when trying to allocate just a small amount of sort space. The second query is the one that will get captured in a trace file, while the first query is more likely to be the root cause of the problem.
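For the second technique listed at the top of this post, here is a hedged example of a real-time query (it assumes the v$tempseg_usage view, available in recent releases; on older releases the equivalent view is v$sort_usage):

SQL> SELECT s.sid, s.username, u.tablespace, u.segtype,
            u.blocks * t.block_size / 1024 / 1024 "MB_USED"
       FROM v$tempseg_usage u, v$session s, dba_tablespaces t
      WHERE s.saddr = u.session_addr
        AND t.tablespace_name = u.tablespace
      ORDER BY 5 DESC;

Each row shows how much sort space a session is currently holding, so a culprit like the Cartesian product in the scenario above stands out immediately.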



All About Temporary Tablespace Part II

Oracle Sorting Basics
As we know, there are different cases where Oracle sorts data. An Oracle session sorts data in memory: if the amount of data being sorted is small enough, the entire sort is completed in memory with no intermediate data written to disk. When Oracle needs to store data in a global temporary table or build a hash table for a hash join, it likewise starts the operation in memory and completes the task without writing to disk if the amount of data involved is small enough.

If an operation exceeds its memory threshold, Oracle breaks the operation into smaller pieces that can each be performed in memory, and partial results are written to disk in a temporary tablespace. The threshold for how much memory any one session may use is controlled by instance parameters. If the workarea_size_policy parameter is set to AUTO, then the pga_aggregate_target parameter indicates how much memory can be used collectively by all sessions for activities such as sorting and hashing, and Oracle automatically decides how much of this memory any individual session should be allowed to use. If workarea_size_policy is set to MANUAL, then instance parameters such as sort_area_size, hash_area_size, and bitmap_merge_area_size dictate how much memory each session can use for these operations. Each database user has a temporary tablespace designated in their user definition (check the dba_users view). Whenever a sort operation grows too large to be performed entirely in memory, Oracle allocates space in the temporary tablespace designated for the user performing the operation.
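A quick way to check which policy is in effect and which temporary tablespace a given user is assigned (SCOTT is just an example user):

SQL> show parameter workarea_size_policy
SQL> show parameter pga_aggregate_target

SQL> SELECT username, temporary_tablespace
       FROM dba_users
      WHERE username = 'SCOTT';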

Temporary segments in temporary tablespaces, which we will call "sort segments", are owned by the SYS user, not by the database user performing the sort operation. There is typically just one sort segment per temporary tablespace, because multiple sessions can share space in one sort segment. Users do not need quota on the temporary tablespace in order to perform sorts on disk. Temporary tablespaces can only hold sort segments, and Oracle's internal behavior is optimized for this fact: writes to a sort segment do not generate redo or undo, and allocations of sort segment blocks to a specific session do not need to be recorded in the data dictionary or a file allocation bitmap. Why? Because data in a temporary tablespace does not need to persist beyond the life of the database session that created it.
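We can observe the shared sort segment directly; a hedged sketch using the v$sort_segment view (one row per temporary tablespace):

SQL> SELECT tablespace_name, current_users, total_blocks,
            used_blocks, free_blocks
       FROM v$sort_segment;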

One SQL statement can cause multiple sort operations, and one database session can have multiple SQL statements active at the same time, each potentially with multiple sorts to disk. When the results of a sort to disk are no longer needed, its blocks in the sort segment are marked as no longer in use and can be allocated to another sort operation. A sort operation will fail if a sort to disk needs more disk space and there are:
1.) No unused blocks in the sort segment, and
2.) No space available in the temporary tablespace for the sort segment to allocate an additional extent.

This will most likely cause the statement that prompted the sort to fail with the Oracle error "ORA-1652: unable to extend temp segment." This error message also gets logged in the instance alert log. It is important to note that not all ORA-1652 errors indicate temporary tablespace issues. For example, moving a table to a different tablespace with the ALTER TABLE ... MOVE statement will cause an ORA-1652 error if the target tablespace does not have enough space for the table.

In a normally running database, the temporary tablespace will appear full after a while. Extents are not deallocated after being used; instead they are managed internally and reused. This is normal and expected, and it is not an indication that we are out of temporary space. If we are not encountering any TEMP-related issue or error, there is nothing to worry about.
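To see how much of that "full" space is actually in use at any moment, here is a hedged sketch against the v$temp_space_header view (available for locally managed temporary tablespaces):

SQL> SELECT tablespace_name,
            SUM(bytes_used)/1024/1024 "USED_MB",
            SUM(bytes_free)/1024/1024 "FREE_MB"
       FROM v$temp_space_header
      GROUP BY tablespace_name;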

There is no quick or scientific way to calculate the required TEMP tablespace size; the only way to estimate it is testing with a representative workload. Remember that it is the information inside the temporary segment that gets released, not the segment itself.




All About Temporary Tablespace Part I

There is a lot of confusion about temporary tablespaces. Here I have tried to cover the basics of the temporary tablespace and one of the most famous related errors, ORA-1652: "unable to extend temp segment". This problem can be addressed as follows:
1.) Increase the size of the temporary tablespace, either by resizing an existing tempfile or by adding a tempfile (see the sketch after this list).
2.) Find the SQL statements that are consuming large amounts of temp space and kill the corresponding sessions (not a proper solution).
3.) Identify the offending SQL and tune it.
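A hedged sketch of option 1 (the tablespace name TEMP and the file paths are placeholders to adapt):

SQL> ALTER DATABASE TEMPFILE '/u01/oradata/orcl/temp01.dbf' RESIZE 2G;

SQL> ALTER TABLESPACE temp
       ADD TEMPFILE '/u01/oradata/orcl/temp02.dbf' SIZE 1G
       AUTOEXTEND ON NEXT 100M MAXSIZE 4G;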

Here I explain what we can do when our database runs out of temporary space.

Introduction : 
Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. For example, if we join two large tables and Oracle cannot do the sort in memory (see the sort_area_size initialisation parameter), space will be allocated in a temporary tablespace for the sort operation. Other SQL operations that might require disk sorting include: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, and so on.

A temporary tablespace contains transient data that persists only for the duration of the session. Temporary tablespaces can improve the concurrency of multiple sort operations, reduce their overhead, and avoid Oracle Database space management operations. Oracle can also allocate temporary segments for temporary tables and for indexes created on temporary tables. Temporary tables hold data that exists only for the duration of a transaction or session. Oracle drops segments for a transaction-specific temporary table at the end of the transaction, and drops segments for a session-specific temporary table at the end of the session. If other transactions or sessions share the use of that temporary table, the segments containing their data remain in the table. We will cover the following points in the next parts:

1.) How Oracle manages sort operations
2.) How a DBA can determine and handle the situation when the temporary tablespace is running out of space




Monday, September 19, 2011

Drop All Objects of a Schema


Sometimes we need to drop all objects of a schema while importing. The basic approach is to drop the schema and recreate it, which is quite efficient, but it requires the SYSTEM or SYSDBA privilege to create and drop the user (see the sketch below). If we do not have those privileges, then dropping all the objects individually is the option. Below is a demo where we drop all objects of the SCOTT schema.
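For reference, the drop-and-recreate approach looks like this (a hedged sketch; the password, tablespaces, and grants are placeholders to adapt):

SQL> DROP USER scott CASCADE;
SQL> CREATE USER scott IDENTIFIED BY tiger
       DEFAULT TABLESPACE users
       TEMPORARY TABLESPACE temp;
SQL> GRANT connect, resource TO scott;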

Step 1 : Generate the script that drops all objects in the schema :

SQL> spool C:\genera_dropall.sql
SQL> select 'drop '||object_type||' '|| object_name||  DECODE(OBJECT_TYPE,'TABLE',' CASCADE CONSTRAINTS;',';')  from user_objects;

'DROP'||OBJECT_TYPE||''||OBJECT_NAME||DECODE(OBJECT_TYPE,'TABLE','CASCADECONSTRAINTS;',';')          
------------------------------------------------------------------------------------------------------------
drop TABLE SALGRADE CASCADE CONSTRAINTS;                                                                                                                                                        
drop TABLE BONUS CASCADE CONSTRAINTS;                                                                                                                                                        
drop INDEX PK_EMP;                                                                                                                                                                      
drop TABLE EMP CASCADE CONSTRAINTS;                                                                                                                                        
drop TABLE DEPT CASCADE CONSTRAINTS;                                                                                                                                                    
drop INDEX PK_DEPT;                                                                                                                                                                        
6 rows selected.

SQL> spool off

The drop script is now saved as 'genera_dropall.sql'.

Step 2 : Run the 'genera_dropall.sql' script to drop everything in the schema.

SQL>@"genera_dropall.sql" 
Now check the remaining objects in the SCOTT schema:

SQL> select * from tab;
TNAME                              TABTYPE      CLUSTERID
--------------------------------   -------      ---------
BIN$5JksbkFeSai/0JTwbJOenQ==$0           TABLE
BIN$KtthiEIaRZmkv/5+FoYu5A==$0              TABLE
BIN$L/qcqzTxTsm8XHkDrfANOg==$0          TABLE
BIN$opUCTunxRf+0AUbhOzzBgw==$0        TABLE

These BIN$ objects are the dropped tables sitting in Oracle's recycle bin, which is why the demo finishes with a "purge recyclebin" step.

Step 3 : Purge the recycle bin objects.

SQL>purge recyclebin ;
Recyclebin purged.

The generated script produces a list of drop statements, and not all of them will execute successfully: dropping a table with CASCADE CONSTRAINTS also drops its indexes, so the explicit drops of the PK_* indexes will fail. In the end, though, we will have a pretty clean schema. Confirm with:

SQL> select * from user_objects;
no rows selected
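As an alternative to spooling a script, the same cleanup can be done in one PL/SQL block (a hedged sketch; it skips recycle-bin objects and ignores errors from objects already dropped via CASCADE CONSTRAINTS):

SQL> BEGIN
       FOR o IN (SELECT object_type, object_name
                   FROM user_objects
                  WHERE object_name NOT LIKE 'BIN$%') LOOP
         BEGIN
           EXECUTE IMMEDIATE 'drop ' || o.object_type || ' "' || o.object_name || '"'
             || CASE WHEN o.object_type = 'TABLE' THEN ' cascade constraints' END;
         EXCEPTION
           WHEN OTHERS THEN NULL;   -- object may already be gone
         END;
       END LOOP;
     END;
     /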


Enjoy    :-) 


Wednesday, September 14, 2011

ORA-01555: Snapshot Too Old

UNDO is the backbone of the READ CONSISTENCY mechanism provided by Oracle. The multi-user data concurrency and read consistency mechanisms make Oracle stand tall in the Relational Database Management Systems (RDBMS) world.

Best of all, automatic undo management allows the DBA to specify how long undo information should be retained after commit, which helps prevent "snapshot too old" errors on long running queries. This is done by setting the UNDO_RETENTION parameter. The default is 900 seconds (15 minutes), and we can set this parameter to guarantee that Oracle keeps undo logs for extended periods of time. A flashback query can go back up to the point in time specified as the value of the UNDO_RETENTION parameter.
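For example, a hedged sketch of adjusting retention (UNDOTBS1 is a placeholder undo tablespace name; RETENTION GUARANTEE is optional and can cause DML to fail in exchange for guaranteeing query success):

SQL> show parameter undo_retention

SQL> ALTER SYSTEM SET undo_retention = 3600;         -- keep undo for 1 hour
SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;  -- optional: enforce the retention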

Why does the ORA-01555 error occur ?

1.) Oracle provides read consistency by reading the "before image" of changed rows from the online undo segments. If we have lots of updates and long running SQL, the rollback records needed by a reader for a consistent read can be overwritten by other writers.

2.) It may also be due to a small undo tablespace and a short undo_retention period.

To solve this issue we need to increase the undo tablespace and the undo retention period. The question then is what the optimal values for undo retention and undo tablespace size are. For this we can use the Undo Advisor; through OEM it is quite easy to estimate the required size and retention.

Calculate Optimal Undo_Retention  :
The following query will help us to optimize the UNDO_RETENTION parameter :


Optimal Undo Retention = Actual Undo Size / (DB_BLOCK_SIZE × UNDO_BLOCK_PER_SEC)

To calculate Actual Undo Size :

SQL> SELECT SUM(a.bytes)/1024/1024 "UNDO_SIZE_MB"
       FROM v$datafile a, v$tablespace b, dba_tablespaces c
      WHERE c.contents = 'UNDO'
        AND c.status = 'ONLINE'
        AND b.name = c.tablespace_name
        AND a.ts# = b.ts#;

Undo Blocks per Second : 

SQL> SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) "UNDO_BLOCK_PER_SEC"
FROM v$undostat ;

DB Block Size  :

SQL> SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"   FROM   v$parameter
  WHERE name = 'db_block_size';

We can do it all in one query:

SQL> SELECT d.undo_size/(1024*1024) "ACT_UNDO_SIZE [MB]",
            SUBSTR(e.value,1,25) "UNDO_RTN [Sec]",
            ROUND(d.undo_size / (TO_NUMBER(f.value) *
            g.undo_block_per_sec)) "OPT_UNDO_RET [Sec]"
       FROM (
             SELECT SUM(a.bytes) undo_size
               FROM v$datafile a,
                    v$tablespace b,
                    dba_tablespaces c
              WHERE c.contents = 'UNDO'
                AND c.status = 'ONLINE'
                AND b.name = c.tablespace_name
                AND a.ts# = b.ts#
            ) d,
            v$parameter e,
            v$parameter f,
            (
             SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
                    undo_block_per_sec
               FROM v$undostat
            ) g
      WHERE e.name = 'undo_retention'
        AND f.name = 'db_block_size'
/
ACT_UNDO_SIZE [MB]   UNDO_RTN [Sec]   OPT_UNDO_RET [Sec]
------------------   --------------   ------------------
                50              900                24000

Calculate Needed UNDO Size :
If we are not limited by disk space, then it is better to choose the UNDO_RETENTION time that is best for us (for FLASHBACK, etc.) and to allocate the appropriate size to the UNDO tablespace according to the database activity:

Formula : Needed Undo Size = UNDO_RETENTION × DB_BLOCK_SIZE × UNDO_BLOCK_PER_SEC
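Plugging in the sample figures from the optimal-retention query above (50 MB actual undo size, 900 s configured retention, 24,000 s optimal retention) and assuming an 8 KB block size, the implied undo generation rate is 52,428,800 / (8192 × 24,000) ≈ 0.27 undo blocks/sec, so: Needed Undo Size = 900 × 8192 × 0.27 ≈ 2 MB. In that example the 50 MB undo tablespace is comfortably oversized for the configured 900-second retention.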



Here again we can find it with a single query:

SQL> SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
       g.undo_block_per_sec) / (1024*1024)
      "NEEDED UNDO SIZE [MByte]"
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a,
              v$tablespace b,
              dba_tablespaces c
        WHERE c.contents = 'UNDO'
          AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name
          AND a.ts# = b.ts#
       ) d,
      v$parameter e,
       v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
         undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention'
  AND f.name = 'db_block_size'
/

We can avoid the ORA-01555 error as follows :

1.) Do not run discrete transactions while sensitive queries or transactions are running, unless we are confident that the data sets required are mutually exclusive.

2.) Schedule long running queries and transactions out of hours, so that the consistent gets will not need to rollback changes made since the snapshot SCN. This also reduces the work done by the server, and thus improves performance.

3.) Code long running processes as a series of restartable steps.

4.) Shrink all rollback segments back to their optimal size manually before running a sensitive query or transaction to reduce risk of consistent get rollback failure due to extent deallocation.

5.) Use a large optimal value on all rollback segments, to delay extent reuse.

6.) Don't fetch across commits. That is, don't fetch on a cursor that was opened prior to the last commit, particularly if the data queried by the cursor is being changed in the current session (see the sketch after this list).

7.) Commit less often in tasks that will run at the same time as the sensitive query, particularly in PL/SQL procedures, to reduce transaction slot reuse.
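As an illustration of point 6, a hedged sketch of the problematic pattern (BIG_TABLE and its PROCESSED column are placeholders):

SQL> BEGIN
       -- The implicit cursor below is opened once; every later fetch needs
       -- read-consistent undo from the time the cursor was opened.
       FOR r IN (SELECT id FROM big_table) LOOP
         UPDATE big_table SET processed = 'Y' WHERE id = r.id;
         COMMIT;   -- committing inside the fetch loop lets Oracle reuse the
       END LOOP;   -- undo this cursor still depends on, risking ORA-01555
     END;
     /

Fetching across our own commits like this is the classic self-inflicted ORA-01555; committing once outside the loop, or restructuring the job into restartable chunks, avoids it.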


Enjoy       :-)