Monday, April 25, 2011

Automated Checkpoint Tuning (MTTR)


Determining the time to recover from an instance failure is a necessary component of meeting required service-level agreements. For example, if service levels dictate that when a node fails, instance recovery may take no more than 3 minutes, FAST_START_MTTR_TARGET should be set to 180 (the value is in seconds).

Fast-start checkpointing refers to the periodic writes by the database writer (DBWn) processes for the purpose of writing changed data blocks from the Oracle buffer cache to disk and advancing the thread checkpoint. Setting the database parameter FAST_START_MTTR_TARGET to a value greater than zero enables the fast-start checkpointing feature.

Fast-start checkpointing should always be enabled for the following reasons:

It reduces the time required for cache recovery, and makes instance recovery time-bounded and predictable. This is accomplished by limiting the number of dirty buffers (data blocks which have changes in memory that still need to be written to disk) and the number of redo records (changes in the database) generated between the most recent redo record and the last checkpoint.

Fast-start checkpointing eliminates the bulk writes and corresponding I/O spikes that traditionally occur with interval-based checkpoints, providing a smoother, more consistent I/O pattern that is more predictable and easier to manage. If the system is not already at or near its maximum I/O capacity, fast-start checkpointing will have a negligible impact on performance. Although fast-start checkpointing results in increased write activity, there is little reduction in database throughput, provided the system has sufficient I/O capacity.

Check-pointing : Check-pointing is an important Oracle activity which records the highest system change number (SCN) such that all data blocks with changes at or below this SCN are known to have been written out to the data files. If there is a failure and then subsequent cache recovery, only the redo records containing changes at SCNs higher than the checkpoint need to be applied during recovery.

As we are aware, instance and crash recovery occur in two steps - cache recovery followed by transaction recovery. During the cache recovery phase, also known as the rolling forward stage, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database and the time between checkpoints.
Mean time to recover (MTTR) : Fast-start recovery can greatly reduce the mean time to recover (MTTR), with minimal effects on online application performance. Oracle continuously estimates the recovery time and automatically adjusts the check-pointing rate to meet the target recovery time.

With 10g, the Oracle database can now self-tune check-pointing to achieve good recovery times with low impact on normal throughput. Apart from FAST_START_MTTR_TARGET itself, we no longer have to set any checkpoint-related parameters.

This method reduces the time required for cache recovery and makes the recovery bounded and predictable by limiting the number of dirty buffers and the number of redo records generated between the most recent redo record and the last checkpoint. Administrators specify a target (bounded) time to complete the cache recovery phase of recovery with the FAST_START_MTTR_TARGET initialization parameter, and Oracle automatically varies the incremental checkpoint writes to meet that target.

The TARGET_MTTR field of V$INSTANCE_RECOVERY contains the MTTR target in effect. The ESTIMATED_MTTR field of V$INSTANCE_RECOVERY contains the estimated MTTR should a crash happen right away.

Enable MTTR advisory :
Enabling the MTTR advisory involves setting two parameters:

STATISTICS_LEVEL = TYPICAL
FAST_START_MTTR_TARGET > 0
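
Once the advisory is enabled, the V$MTTR_TARGET_ADVICE view shows the estimated extra checkpoint I/O that alternative MTTR targets would cause. A quick look (a minimal sketch; only a few of the view's columns are selected here, and output will vary):

SQL> SELECT mttr_target_for_estimate, estd_cache_writes, estd_total_ios
       FROM v$mttr_target_advice;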

Estimate the value for FAST_START_MTTR_TARGET as follows:

SELECT TARGET_MTTR, ESTIMATED_MTTR, CKPT_BLOCK_WRITES
  FROM V$INSTANCE_RECOVERY;

TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES
----------- -------------- -----------------
        214             12            269880

Based on this output, set FAST_START_MTTR_TARGET = 214.

Whenever you set FAST_START_MTTR_TARGET to a nonzero value, set the following parameters to 0 so that they do not override the MTTR target:
LOG_CHECKPOINT_TIMEOUT = 0
LOG_CHECKPOINT_INTERVAL = 0
FAST_START_IO_TARGET = 0
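
These settings can be applied online from SQL*Plus; a minimal sketch, assuming an spfile so the change persists across restarts (the value 214 comes from the query above; FAST_START_IO_TARGET is already deprecated in 10g, so zeroing it matters mainly on upgraded systems):

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 214 SCOPE=BOTH;
SQL> ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 0 SCOPE=BOTH;
SQL> ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL = 0 SCOPE=BOTH;
SQL> ALTER SYSTEM SET FAST_START_IO_TARGET = 0 SCOPE=BOTH;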

Disable MTTR advisory :

FAST_START_MTTR_TARGET = 0
LOG_CHECKPOINT_INTERVAL = 200000

Saturday, April 23, 2011

Oracle NLS_LANG Setting for Language/Territory/Character Set

Setting NLS_LANG tells Oracle which character set the client is using, so that Oracle can convert, if needed, from the client's character set to the database character set; setting this parameter on the client does not change the client's character set. The Language and Territory components of NLS_LANG have nothing to do with storing characters in the database; that is controlled by the database character set (and, of course, by whether the database character set can store those characters). Below is the syntax for setting NLS_LANG.

NLS_LANG=<language>_<territory>.<character set>
Example: NLS_LANG=BRAZILIAN PORTUGUESE_BRAZIL.WE8MSWIN1252
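
NLS_LANG is a client-side environment variable (a registry entry for Windows GUI tools), so it is set in the operating system before starting the client. A minimal sketch, with illustrative values:

$ export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252        (Unix/Linux shell)
C:\> set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252        (Windows command prompt)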

To check the session NLS parameters (note that this does not return the character set specified by NLS_LANG):

SQL> select * from nls_session_parameters;


To find the database's equivalent of NLS_LANG (language, territory, and character set), one can run the following SQL:
SQL> SELECT DECODE(parameter, 'NLS_CHARACTERSET', 'CHARACTER SET',
                   'NLS_LANGUAGE', 'LANGUAGE',
                   'NLS_TERRITORY', 'TERRITORY') name, value
       FROM v$nls_parameters
      WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');
Sample Output :
NAME          VALUE
------------- ------------
LANGUAGE      AMERICAN
TERRITORY     AMERICA
CHARACTER SET WE8MSWIN1252


Setting NLS_LANG for export/import : If we encounter character set conversion problems while exporting or importing a database or table(s), we should check the following points to confirm whether the export/import procedure was performed correctly.

When exporting/importing, one can minimize the risk of losing data by setting NLS_LANG correctly (see the sketch after the list):


1.) Before starting the export, set NLS_LANG to the character set of the database being exported. No conversion then takes place, and all the data is stored in the export file exactly as it was stored in the database.
2.) Before starting the import, set NLS_LANG to the same value it had during the export. No conversion takes place in the import session either, but if the character set of the target database is different, the data is automatically converted when import inserts it into the database.
3.) Alternatively, before starting the export, set NLS_LANG to the character set of the target database being imported into; the conversion then takes place automatically during the export.
4.) In that case, before starting the import, set NLS_LANG to the same value it had during the export; no conversion takes place on import, as the data was already converted during the export.
5.) Check the settings on the machine from which you run the export and import. Even if the NLS character set and NLS NCHAR settings of the source and destination databases are the same, the console (client environment) used for the export and for the import should also match; otherwise you will get junk characters.
6.) Sometimes after an import we get special characters (like ? or !); in that case, go to the OS 'Regional Settings' and change the location to (say) "Brazil", and make the default language "Brazil" as well (if we are importing from American to Brazilian settings).
7.) Check the export log file and see which character sets are specified there.
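
A minimal sketch of points 1 and 2 with the classic exp/imp utilities (user, password, and file names are hypothetical; the character set must be the actual one of your source database):

$ export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252    (same as the source database character set)
$ exp scott/tiger file=emp.dmp tables=emp

$ export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252    (same value as during the export)
$ imp scott/tiger file=emp.dmp tables=emp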




Enjoy          :-)



Friday, April 22, 2011

Who is using which UNDO or TEMP segment?


The undo tablespace is common to all users of an instance, while temporary tablespaces are either assigned to individual users or a single default temporary tablespace is shared by all users. To determine who is using a particular UNDO or rollback segment, use the query below.

SQL> SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
            NVL(s.username, 'None') orauser, s.program, r.name undoseg,
            t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
       FROM sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter x
      WHERE s.taddr = t.addr
        AND r.usn = t.xidusn(+)
        AND x.name = 'db_block_size';
Output :
SID_SERIAL ORAUSER PROGRAM                       UNDOSEG   Undo
---------- ------- ----------------------------- --------- ----
260,7      SCOTT   sqlplus@localhost.localdomain _SYSSMU4$ 8K


To determine which users are using a TEMP tablespace, run the query below:

SQL> SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
           a.sid||','||a.serial# SID_SERIAL , a.username, a.program 
           FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
           WHERE p.name  = 'db_block_size'  AND a.saddr = b.session_addr
           ORDER BY b.tablespace, b.blocks; 
Output :
TABLESPACE SIZE SID_SERIAL USERNAME PROGRAM
---------- ---- ---------- -------- -----------------------------
TEMP       24M  260,7      SCOTT    sqlplus@localhost.localdomain
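
To see which temporary tablespace a given user is assigned to in the first place, the documented DBA_USERS view can be queried (a minimal sketch; SCOTT is just the sample user from above):

SQL> SELECT username, temporary_tablespace
       FROM dba_users
      WHERE username = 'SCOTT';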
                                     
 

Enjoy         :-) 



How full is the current redo log file?


Here is a query that can tell us how full the current redo log file is. This is useful when we need to predict when the next log file will be archived.

SQL> SELECT le.leseq                 "Current log sequence No",
            100*cp.cpodr_bno/le.lesiz "Percent Full",
            cp.cpodr_bno              "Current Block No",
            le.lesiz                  "Size of Log in Blocks"
       FROM x$kcccp cp, x$kccle le
      WHERE le.leseq = cp.cpodr_seq
        AND BITAND(le.leflg, 24) = 8;

Sample Output :
Current log sequence No Percent Full Current Block No Size of Log in Blocks
----------------------- ------------ ---------------- ---------------------
                      7   18.1982422            18635                102400
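
Note that the x$ tables used above are visible only when connected as SYS. A coarser but documented alternative is V$LOG, which at least shows which group is CURRENT and how big it is (a minimal sketch):

SQL> SELECT group#, sequence#, bytes/1024/1024 "Size (MB)", status
       FROM v$log;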

Enjoy   :-)


How to gather statistics on data dictionary objects in Oracle


Before Oracle Database 10g, Oracle explicitly recommended NOT gathering statistics on data dictionary objects. As of Oracle Database 10g, Oracle explicitly recommends gathering statistics on data dictionary objects. As we might know, there is an automatically created SCHEDULER JOB in every 10g database which runs every night and checks for objects which either have no statistics at all or whose statistics have become STALE (which means that at least 10% of the values have changed). This job is called GATHER_STATS_JOB and belongs to the autotask job class.

It uses a program which in turn calls a procedure from the built-in package DBMS_STATS, which does the statistics collection. This feature only works if the initialization parameter STATISTICS_LEVEL is set to at least TYPICAL (the DEFAULT in 10g), and it utilizes the TABLE MONITORING feature. TABLE MONITORING is enabled for all tables in 10g by DEFAULT.
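
A quick check of the prerequisite from SQL*Plus:

SQL> show parameter statistics_level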

One question that may come to mind is: "Does this job also collect statistics on the data dictionary objects?" The answer is "YES, it does!", and here is the proof. First, let us check whether dbms_stats.gather_database_stats collects statistics for the data dictionary:

SQL> select count(*) from tab$;

COUNT(*) 
--------------
1227
SQL> create table t2 (col1 number);
Table created.

SQL> select count(*) from tab$;
COUNT(*)      
---------------
 1228

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
NUM_ROWS
------------------
1213

SQL> exec dbms_stats.gather_database_stats;
PL/SQL procedure successfully completed.

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
 NUM_ROWS
-------------------
1228             
IT DOES! Now let's see if the job does, too:

SQL> create table t3 (col1 number);
Table created.

SQL> create table t4 (col1 number);
Table created.

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
NUM_ROWS
--------------
1228

Now run GATHER_STATS_JOB manually from DATABASE CONTROL (or from SQL*Plus, as sketched below)!
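
A minimal sketch of doing the same from SQL*Plus instead of Database Control (the job is owned by SYS, so suitable privileges are needed):

SQL> EXEC DBMS_SCHEDULER.RUN_JOB('SYS.GATHER_STATS_JOB');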

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
NUM_ROWS
-----------------
1230

and IT ALSO DOES! 
Even though not even 0.1% of the values had changed, it did! So when should we gather statistics for the data dictionary manually? Oracle recommends collecting them when a significant number of changes has been applied to the data dictionary, such as dropping significant numbers of partitions and creating new ones, or dropping tables and indexes and creating new ones. But this only applies if it is a significant number of changes and we cannot wait for the next automatically scheduled job run.
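
For such a manual run, DBMS_STATS provides a dedicated procedure for the dictionary (a minimal sketch; it needs appropriate privileges, e.g. SYSDBA or ANALYZE ANY DICTIONARY):

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;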


Enjoy    :-)