Monday, May 16, 2011

Cold Cloning Using Controlfile Backup in Oracle 11g

Cloning is the ability to copy and restore a database image and set it up as a new instance. The new instance can reside on the same system as the original database, or on a different system.
                                                                                                                                                                                 
There are two methods of cloning:
1.) Cold cloning : the source database is shut down cleanly before the image is copied, so no recovery is required when the clone is opened.
2.) Hot cloning : the source database stays open while the copy is taken. The clone is then recovered from the hot backup together with the controlfiles and archived logs.

Reason for Cloning :
In every Oracle development and production environment the need eventually arises to transport an entire database from one physical machine to another. The copy may be used for development, production testing, beta testing, and so on, and management will usually want it done quickly. Typical reasons for cloning an Oracle system include:

  • Creating a copy of the production system for testing updates.
  • Migrating an existing system to new hardware.
  • Creating a stage area to reduce patching downtime.
  • Relocating an Oracle database to another machine.
  • Renaming an Oracle database.
Terms used to describe the method

Production database        ===>>  "noida"
Database to be cloned      ===>>  "delhi"
Platform used              ===>>  Oracle 11gR1

Here is the step-by-step method of cloning.
                                                                                                                   
Step 1 : Create directory structure for clone database : 

Create the directory structure for the clone database files. In my case all datafiles, controlfiles and redo logs will be stored in "D:\oracle\oradata\delhi", so create a folder named "delhi" inside the oradata folder. Similarly, inside the admin folder create a new folder "delhi", and inside it create the folders adump, pfile and dpdump.
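On a Unix-style staging host the same layout can be scripted; this is only an illustrative sketch, with /tmp/oracle standing in for D:\oracle:

```shell
# Create the clone's directory tree. BASE is an illustrative stand-in
# for the Oracle base directory (D:\oracle in this article).
BASE=/tmp/oracle

# datafiles, controlfiles and redo logs live here
mkdir -p "$BASE/oradata/delhi"

# admin directories for the clone instance
for d in adump pfile dpdump; do
    mkdir -p "$BASE/admin/delhi/$d"
done
```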

Step 2 : Create pfile for clone database :

C:\>sqlplus sys/xxxx@noida as sysdba
SQL> create  pfile='C:\initdelhi.ora'  from  spfile ;
File created.

The pfile of the "noida" database is:
noida.__db_cache_size=109051904
noida.__java_pool_size=12582912
noida.__large_pool_size=4194304
noida.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
noida.__pga_aggregate_target=104857600
noida.__sga_target=322961408
noida.__shared_io_pool_size=0
noida.__shared_pool_size=188743680
noida.__streams_pool_size=0
*.audit_file_dest='D:\oracle\admin\noida\adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='noida'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=noidaXDB)'
*.log_archive_dest=''
*.log_archive_dest_1='location=D:\archive\'
*.log_archive_format='noida_%s_%t_%r.arc'
*.memory_target=425721856
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS2'

Replace the text "noida" with "delhi" and save the file. We now have the pfile for the clone database.
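Rather than editing by hand, the substitution can be scripted. The sketch below uses sed, with temp-file names standing in for C:\initdelhi.ora; on Windows the same replace can be done in Notepad via Edit > Replace:

```shell
# Build a tiny stand-in pfile, then derive the clone pfile by replacing
# every occurrence of the database name.
printf "*.db_name='noida'\n*.audit_file_dest='D:/oracle/admin/noida/adump'\n" \
    > /tmp/initnoida.ora

# one global substitution produces the clone pfile
sed 's/noida/delhi/g' /tmp/initnoida.ora > /tmp/initdelhi.ora
cat /tmp/initdelhi.ora
```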

delhi.__db_cache_size=109051904
delhi.__java_pool_size=12582912
delhi.__large_pool_size=4194304
delhi.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
delhi.__pga_aggregate_target=104857600
delhi.__sga_target=322961408
delhi.__shared_io_pool_size=0
delhi.__shared_pool_size=188743680
delhi.__streams_pool_size=0
*.audit_file_dest='D:\oracle\admin\delhi\adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\delhi\CONTROL01.CTL','D:\ORACLE\ORADATA\delhi\CONTROL02.CTL','D:\ORACLE\ORADATA\delhi\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='delhi'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=delhiXDB)'
*.log_archive_dest=''
*.log_archive_dest_1='location=D:\archive\'
*.log_archive_format='delhi_%s_%t_%r.arc'
*.memory_target=425721856
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS2'

Step 3 : Configure Listener and Services for Clone Database 
Configure the listener using netmgr and configure the TNS entry using netca. Then reload the listener and check its status.
C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 11.1.0.6.0 - Production on 13-MAY-2011 13:14:31
Copyright (c) 1991, 2007, Oracle.  All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> reload
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx)(PORT=1521)))
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx)(PORT=1521)))
STATUS of the LISTENER
---------------------------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
Start Date                13-MAY-2011 11:12:13
Uptime                    0 days 2 hr. 2 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   D:\oracle\product\11.1.0\db_1\network\admin\listener.ora
Listener Log File         d:\oracle\diag\tnslsnr\ramtech-199\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=xxxx)(PORT=1521)))
Services Summary...
Service "delhi" has 1 instance(s).
  Instance "delhi", status UNKNOWN, has 1 handler(s) for this service...
Service "noida" has 1 instance(s).
  Instance "noida", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> exit

Check for TNS

C:\> tnsping delhi
TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 13-MAY-2011 13:17:06
Copyright (c) 1997, 2007, Oracle.  All rights reserved.
Used parameter files:
D:\oracle\product\11.1.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = delhi)))
OK (50 msec)

Step 4 : Create  Instance for  Clone database 

C:\>oradim  -new  -sid  delhi  -intpwd  delhi  -startmode  m
Instance created.

Step 5 : Startup the clone database at nomount stage 
C:\>sqlplus sys/delhi@delhi as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri May 13 13:23:11 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> create spfile from pfile='C:\initdelhi.ora';
File created.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             310380144 bytes
Database Buffers          109051904 bytes
Redo Buffers                6086656 bytes

Step  6 : Create Script for Controlfile
At the production  database  :
SQL> alter database backup controlfile to trace;
Database altered.

Now check the alert log to find the name of the .trc trace file that holds the controlfile backup. In my case the trace file contains the following:
     
Trace file d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3952.trc
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Windows XP Version V5.1 Service Pack 2
CPU                 : 2 - type 586
Process Affinity    : 0x00000000
Memory (Avail/Total): Ph:52M/1015M, Ph+PgF:3270M/5518M, VA:1414M/2047M
Instance name: noida
Redo thread mounted by this instance: 1
Oracle process number: 18
Windows thread id: 3952, image: ORACLE.EXE (SHAD)
*** 2011-05-13 13:28:03.750
*** SESSION ID:(170.5) 2011-05-13 13:28:03.750
*** CLIENT ID:() 2011-05-13 13:28:03.750
*** SERVICE NAME:() 2011-05-13 13:28:03.750
*** MODULE NAME:(sqlplus.exe) 2011-05-13 13:28:03.750
*** ACTION NAME:() 2011-05-13 13:28:03.750
 Successfully allocated 2 recovery slaves
*** 2011-05-13 13:28:04.078
Using 545 overflow buffers per recovery slave
Thread 1 checkpoint: logseq 11, block 2, scn 1582181
  cache-low rba: logseq 11, block 87608
    on-disk rba: logseq 11, block 89694, scn 1628819
  start recovery at logseq 11, block 87608, scn 0
==== Redo read statistics for thread 1 ====
Total physical reads (from disk and memory): 4096Kb
-- Redo read_disk statistics --
Read rate (ASYNC): 1043Kb in 0.72s => 1.41 Mb/sec
Longest record: 13Kb, moves: 0/1112 (0%)
Change moves: 2/51 (3%), moved: 0Mb
Longest LWN: 404Kb, moves: 0/60 (0%), moved: 0Mb
Last redo scn: 0x0000.0018da92 (1628818)
----------------------------------------------
*** 2011-05-13 13:28:05.593
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 2
Average hash chain = 414/413 = 1.0
Max compares per lookup = 1
Avg compares per lookup = 1510/1998 = 0.8
---------------------------------------------
*** 2011-05-13 13:28:05.593
KCRA: start recovery claims for 414 data blocks
*** 2011-05-13 13:28:05.609
KCRA: blocks processed = 414/414, claimed = 414, eliminated = 0
*** 2011-05-13 13:28:07.281
Recovery of Online Redo Log: Thread 1 Group 2 Seq 11 Reading mem 0
*** 2011-05-13 13:28:07.703
Completed redo application
*** 2011-05-13 13:28:08.750
Completed recovery checkpoint
IR RIA: redo_size 1068032 bytes, time_taken 193 ms
*** 2011-05-13 13:28:09.406
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 2
Average hash chain = 414/413 = 1.0
Max compares per lookup = 2
Avg compares per lookup = 1739/1923 = 0.9
----------------------------------------------
*** 2011-05-13 13:28:26.921
kwqmnich: current time::  7: 58: 26
kwqmnich: instance no 0 check_only flag 1
*** 2011-05-13 13:28:27.250
kwqmnich: initialized job cache structure
*** MODULE NAME:(Oracle Enterprise Manager.pin EM plsql) 2011-05-13 13:29:07.781
*** ACTION NAME:(start) 2011-05-13 13:29:07.781
 *** 2011-05-13 13:29:07.781
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
-- LOG_ARCHIVE_FORMAT=noida_%s_%t_%r.arc
-- DB_UNIQUE_NAME="noida"
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- STANDBY_ARCHIVE_DEST=%ORACLE_HOME%\RDBMS
-- FAL_CLIENT=''
-- FAL_SERVER=''
-- LOG_ARCHIVE_DEST_1='LOCATION=D:\archive\'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOEXPEDITE NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
--     Set #1. NORESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "NOIDA" NORESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\NOIDA\REDO01.LOG'  SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\NOIDA\REDO02.LOG'  SIZE 50M,
  GROUP 3 'D:\ORACLE\ORADATA\NOIDA\REDO03.LOG'  SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  'D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF'
CHARACTER SET WE8MSWIN1252
;
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 2 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','DISK TO ''D:\rman_bkp\cf\%F''');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ENCRYPTION FOR DATABASE','OFF');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE DISK FORMAT   ''D:\rman_bkp\%U''');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_636026939.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_749730106.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_750184743.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE
-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
-- Database can now be opened normally.
ALTER DATABASE OPEN;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\ORADATA\NOIDA\TEMP01.DBF'
     SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
-- End of tempfile additions.
--     Set #2. RESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "NOIDA" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\NOIDA\REDO01.LOG'  SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\NOIDA\REDO02.LOG'  SIZE 50M,
  GROUP 3 'D:\ORACLE\ORADATA\NOIDA\REDO03.LOG'  SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  'D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF'
CHARACTER SET WE8MSWIN1252
;
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 2 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','DISK TO ''D:\rman_bkp\cf\%F''');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ENCRYPTION FOR DATABASE','OFF');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE DISK FORMAT   ''D:\rman_bkp\%U''');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_636026939.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_749730106.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_750184743.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\ORADATA\NOIDA\TEMP01.DBF'
     SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
-- End of tempfile additions.

Edit the file: take the RESETLOGS set of statements, replace "REUSE" with "SET", change the database name from "NOIDA" to "DELHI", and point the logfile and datafile paths at the delhi directory (the datafiles will be copied there in Step 7). After editing it looks like this:

CREATE CONTROLFILE  SET  DATABASE "DELHI"  RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\DELHI\REDO01.LOG'  SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\DELHI\REDO02.LOG'  SIZE 50M,
  GROUP 3 'D:\ORACLE\ORADATA\DELHI\REDO03.LOG'  SIZE 50M
DATAFILE
  'D:\ORACLE\ORADATA\DELHI\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\DELHI\SYSAUX01.DBF',
  'D:\ORACLE\ORADATA\DELHI\USERS01.DBF',
  'D:\ORACLE\ORADATA\DELHI\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\DELHI\UNDOTBS02.DBF'
CHARACTER SET WE8MSWIN1252;

Save the edited file as create_control.sql.

Step 7 : Restore the datafiles :

Shut down the production database "noida" and copy all the datafiles from the production location to the clone location. In my case, I copied all datafiles from 'D:\oracle\oradata\noida\' to 'D:\oracle\oradata\delhi\'.
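On a Unix-like host this copy step would amount to the following sketch (the paths and dummy files are stand-ins for the Windows directories above; the point is simply that every datafile is copied while the source database is shut down):

```shell
SRC=/tmp/oradata/noida    # stand-in for D:\oracle\oradata\noida
DST=/tmp/oradata/delhi    # stand-in for D:\oracle\oradata\delhi

# simulate a couple of datafiles for the sketch
mkdir -p "$SRC" "$DST"
touch "$SRC/SYSTEM01.DBF" "$SRC/SYSAUX01.DBF" "$SRC/USERS01.DBF"

# copy every datafile to the clone location
cp "$SRC"/*.DBF "$DST"/
ls "$DST"
```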

Step 8 : Execute the control file script : 

Since the clone database "delhi" is in nomount stage, execute the create_control.sql script.
SQL> @C:\create_control.sql
Control file created.
The controlfile is created, and the database is now in mount stage.

Step 9 : Finally open the clone database with resetlogs option : 

SQL> alter database open resetlogs;
Database altered. 

SQL> select name,open_mode from v$database;
NAME       OPEN_MODE
---------  ----------
DELHI      READ WRITE


Enjoy      :-)


Tuesday, May 10, 2011

Set Newname Flexibility in 11gR2

Some new features were added to the SET NEWNAME command in 11gR2 to make it more flexible. Suppose we are restoring datafiles from a backup, either on the same server or on a different one such as staging. If the filesystem (or diskgroup) names are identical, nothing needs to change, but that is hardly ever the case: in staging the filesystems may differ, or perhaps we are restoring a production database to an ASM diskgroup different from the one where it was originally created. In that case we have to tell RMAN the new name of each datafile, and the way to do that is the SET NEWNAME command.

Here is an example: we will restore datafiles 1 and 2 to the D: drive, whereas their previous location was on the C: drive. Here is the RMAN SET NEWNAME command:

RMAN> run {
        set newname for datafile 1 to 'D:\oradata\system_01.dbf';
        set newname for datafile 2 to 'D:\oradata\sysaux_01.dbf';
        restore database;
        switch datafile all;   # point the controlfile at the new names
      }

Here there are just two datafiles, but what if we had hundreds or even thousands? Entering all that information would be not only a herculean task but error-prone as well. Instead of naming each datafile, we can now use a single SET NEWNAME clause for a whole tablespace. Here is how:

RMAN> run {
        set newname for tablespace examples to 'D:\oradata\examples%b.dbf';
        ... rest of the commands come here ...
      }

If the tablespace has more than one datafile, they will all be uniquely named. We can use this clause for the entire database as well:
RMAN> run {
        set newname for database to 'D:\oradata\%b';
      }

The term %b specifies the base filename without the path; e.g. 'C:\oradata\file1.dbf' becomes file1.dbf under %b. This is very useful when moving files to a different directory. We can also use it when creating image copies, where the backup is created in a different location with the same names as the parent files, which makes identification easy. Here are some more placeholders:

%f    ==    the absolute file number
%U    ==    a system-generated unique name, similar to %U in backup formats
%I    ==    the database ID
%N    ==    the tablespace name

Using these placeholders, a single SET NEWNAME command can cover the entire database, making the process not only easier but more accurate as well.
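The %b substitution behaves like the Unix basename command; a quick sketch of what RMAN would substitute for %b for each file:

```shell
# %b keeps only the base filename, dropping the directory part
for f in /oradata/system01.dbf /oradata/sysaux01.dbf; do
    basename "$f"
done
```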



Enjoy        :-)



How to Drop UNDO Tablespace


Dropping an undo tablespace is not an easy task. Once I had to delete the undo tablespace for some reason, and I found that it is not straightforward. I got the following error while dropping the tablespace:

SQL> select tablespace_name,file_name from dba_data_files;

TABLESPACE_NAME   FILE_NAME
----------------  -------------------------------------
USERS             D:\ORACLE\ORADATA\NOIDA\USERS01.DBF
UNDOTBS1          D:\ORACLE\ORADATA\NOIDA\UNDOTBS01.DBF
SYSAUX            D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF
SYSTEM            D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF
EXAMPLE           D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF

SQL> drop tablespace undotbs1;
drop tablespace undotbs1
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

As the error indicates that the undo tablespace is in use, I issued the following command.

SQL> alter tablespace undotbs1  offline;
alter tablespace undotbs1  offline
*
ERROR at line 1:
ORA-30042: Cannot offline the undo tablespace.

Therefore, to drop an undo tablespace we have to perform the following steps:

 1.) Create a new undo tablespace.
 2.) Make it the default undo tablespace and set undo management to manual by editing the parameter file, then restart.
 3.) Check that all segments of the old undo tablespace are offline.
 4.) Drop the old tablespace.
 5.) Change undo management back to auto by editing the parameter file and restart the database.

Step 1 : Create the new undo tablespace undotbs2

SQL> create undo tablespace UNDOTBS2 datafile  'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF'  size 100M;
Tablespace created.

Step 2 : Edit the parameter file

SQL> alter system set undo_tablespace=UNDOTBS2 ;
System altered.

SQL> alter system set undo_management=MANUAL scope=spfile;
System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             360711792 bytes
Database Buffers           58720256 bytes
Redo Buffers                6086656 bytes
Database mounted.
Database opened.
SQL> show parameter undo_tablespace
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDOTBS2

Step 3 : Check that all segments of the old undo tablespace are offline

SQL> select owner, segment_name, tablespace_name, status from dba_rollback_segs order by 3;

OWNER   SEGMENT_NAME            TABLESPACE_NAME  STATUS
------  ----------------------  ---------------  -------
SYS     SYSTEM                  SYSTEM           ONLINE
PUBLIC  _SYSSMU10_1192467665$   UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU1_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU2_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU3_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU4_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU5_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU6_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU7_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU8_1192467665$    UNDOTBS1         OFFLINE
PUBLIC  _SYSSMU9_1192467665$    UNDOTBS1         ONLINE
PUBLIC  _SYSSMU12_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU13_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU14_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU15_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU11_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU17_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU18_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU19_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU20_1304934663$   UNDOTBS2         OFFLINE
PUBLIC  _SYSSMU16_1304934663$   UNDOTBS2         OFFLINE

21 rows selected.

If any segment of the old undo tablespace is still online, change its status to offline using the command below.
SQL> alter rollback segment "_SYSSMU9_1192467665$" offline;
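When many segments are still online, SQL*Plus can generate the OFFLINE statements instead of typing them one by one. This is only a sketch of the idea; spool the output to a script and run it:

```
SQL> select 'alter rollback segment "' || segment_name || '" offline;'
     from   dba_rollback_segs
     where  tablespace_name = 'UNDOTBS1'
     and    status = 'ONLINE';
```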

Step 4 : Drop old undo tablespace

SQL> drop tablespace UNDOTBS1 including contents and datafiles;
Tablespace dropped.

Step  5 : Change undo management to auto and restart the database

SQL> alter system set undo_management=auto scope=spfile;
System altered.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             364906096 bytes
Database Buffers           54525952 bytes
Redo Buffers                6086656 bytes
Database mounted.
Database opened.

SQL> show parameter undo_tablespace
NAME                                       TYPE        VALUE
------------------------------------   ----------- ------------------------------
undo_tablespace                      string      UNDOTBS2


Enjoy      :-)


Exclude and Include parameter of Data Pump

The EXCLUDE and INCLUDE parameters of Data Pump filter which objects are loaded or unloaded. This so-called metadata filtering is implemented through the EXCLUDE and INCLUDE parameters, which limit the export/import to specific objects.

All object types for the given mode of export will be included except those specified in an EXCLUDE clause. If an object is excluded, all of its dependent objects are also excluded; for example, excluding a table also excludes all indexes and triggers on that table. The name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.

The EXCLUDE and INCLUDE parameters are mutually exclusive: it is not possible to specify both in the same job.

The basic syntax for both parameters is the same.

INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]

Few examples are :

EXCLUDE=SEQUENCE, TABLE:"IN ('EMP', 'DEPT')"
EXCLUDE=INDEX:"= 'MY_INDX'"
INCLUDE=PROCEDURE:"LIKE 'MY_PROC_%'"
INCLUDE=FUNCTION, PACKAGE, TABLE:"= 'EMP'" 
INCLUDE=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"

To run such a job without a parameter file, we need to escape the special characters on the command line; incorrect escaping results in errors.
For example  :   INCLUDE=TABLE:\"IN \(\'EMP\', \'DEP\'\)\"
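Alternatively, placing the filter in a parameter file avoids operating-system escaping entirely. The file name include.par and its contents below are illustrative:

```
# include.par -- no shell escaping is needed inside a parameter file
DIRECTORY=datapump
DUMPFILE=expdp_emp_dep.dmp
LOGFILE=expdp_emp_dep.log
INCLUDE=TABLE:"IN ('EMP', 'DEP')"
```

C:\> expdp system/xxxx parfile=include.par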

To determine which objects are dependent on, for example, a TABLE, we can run the following query:

SQL> select  named, object_path, comments FROM database_export_objects WHERE object_path LIKE 'TABLE/%';

Note that when excluding users, specifying EXCLUDE=USER excludes only the definitions of the users, not the objects contained within the users' schemas. To exclude a specific user and all objects of that user, specify a filter such as the following (where SCOTT is the schema name of the user we want to exclude):
EXCLUDE=SCHEMA:\"='SCOTT'\"
The filter can also be driven by a subquery, as in the export command below.

C:\> expdp system/xxxx directory=datapump dumpfile=expdp_scott.dmp  logfile=expdp_scott.log schemas=scott  INCLUDE=TABLE:\"IN \(SELECT object_name FROM scott.expdp_table WHERE owner=\'SCOTT\' AND object_type=\'TABLE\'\)\"


Enjoy   :-)




Saturday, May 7, 2011

How Data Pump Export Parameters Map to Those of the Original Export Utility

Here is the mapping of Data Pump Export parameters to the original Export utility parameters. In some cases, because of feature redesign, the original Export parameter is no longer needed, so there is no Data Pump parameter to compare it to. Also, some parameter names may be the same while the functionality is slightly different. The mapping is as follows:


Original Export Parameter  ==>  Comparable Data Pump Export Parameter

BUFFER               ==>  Not needed.
COMPRESS             ==>  Not needed.
CONSISTENT           ==>  Not needed; use FLASHBACK_SCN and FLASHBACK_TIME for this functionality.
CONSTRAINTS          ==>  EXCLUDE=CONSTRAINT
DIRECT               ==>  Not needed; Data Pump Export automatically chooses the best method (direct path mode or external tables mode).
FEEDBACK             ==>  STATUS
FILE                 ==>  DUMPFILE
FILESIZE             ==>  FILESIZE
FLASHBACK_SCN        ==>  FLASHBACK_SCN
FLASHBACK_TIME       ==>  FLASHBACK_TIME
FULL                 ==>  FULL
GRANTS               ==>  EXCLUDE=GRANT
HELP                 ==>  HELP
INDEXES              ==>  EXCLUDE=INDEX
LOG                  ==>  LOGFILE
OBJECT_CONSISTENT    ==>  Not needed.
OWNER                ==>  SCHEMAS
PARFILE              ==>  PARFILE
QUERY                ==>  QUERY
RECORDLENGTH         ==>  Not needed; sizing is done automatically.
RESUMABLE            ==>  Not needed; provided automatically for users granted the EXP_FULL_DATABASE role.
RESUMABLE_NAME       ==>  Not needed; provided automatically for users granted the EXP_FULL_DATABASE role.
RESUMABLE_TIMEOUT    ==>  Not needed; provided automatically for users granted the EXP_FULL_DATABASE role.
ROWS=N               ==>  CONTENT=METADATA_ONLY
ROWS=Y               ==>  CONTENT=ALL
STATISTICS           ==>  Not needed; statistics are always saved for tables.
TABLES               ==>  TABLES
TABLESPACES          ==>  TABLESPACES (same parameter; slightly different behavior)
TRANSPORT_TABLESPACE ==>  TRANSPORT_TABLESPACES (same parameter; slightly different behavior)
TRIGGERS             ==>  EXCLUDE=TRIGGER
TTS_FULL_CHECK       ==>  TRANSPORT_FULL_CHECK
USERID               ==>  Not needed; supplied as the username and password when you invoke Export.
VOLSIZE              ==>  Not needed.


Enjoy  :-)