Friday, April 22, 2011

Cloning a Database on the Same Server Using RMAN

A nice feature of RMAN is the ability to duplicate, or clone, a database from a previous backup. It is possible to create a duplicate database on a remote server with the same file structure, on a remote server with a different file structure, or on the local server with a different file structure. In this article I'll demonstrate how to duplicate a database on the local server. This can prove useful when we want to recover selected objects from a backup, rather than roll back a whole database or tablespace.


Purpose of Database Duplication : The goal of database duplication is to create a duplicate database, which is a separate database that contains all or a subset of the data in the source database. A duplicate database is useful for a variety of purposes, most of which involve testing. We can perform the following tasks in a duplicate database:
1.) Test backup and recovery procedures.
2.) Test an upgrade to a new release of Oracle Database.
3.) Test the effect of applications on database performance.
4.) Generate reports.
5.) Export data such as a table that was inadvertently dropped from the production database, and then import it back into the production database.
For example, we can duplicate the production database on host1 to host2, and then use the duplicate database on host2 to practice restoring and recovering this database while the production database on host1 operates as usual.

Here, we will go step by step through creating the clone, or duplicate, database.


Terms used to describe the method step by step :
Target : database to be cloned = "noida"
Duplicate or clone database = "clone"
Here the source and duplicate database instances are on the same Windows server.

Step 1 : Prepare the init.ora file for the duplicate instance
Create a pfile from the spfile of the target database as below :
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE
SQL> create pfile='c:\noidainit.ora' from spfile;
File created.

Now edit the noidainit file.


a.) Replace "noida" with "clone".
b.) Add the two parameters below, as shown in the clone parameter file :
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT

Below is the pfile of the target database (noida) :
noida.__db_cache_size=67108864
noida.__java_pool_size=12582912
noida.__large_pool_size=4194304
noida.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
noida.__pga_aggregate_target=83886080
noida.__sga_target=234881024
noida.__shared_io_pool_size=0
noida.__shared_pool_size=130023424
noida.__streams_pool_size=4194304
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='noida'
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=noidaXDB)'
*.log_archive_dest_1='LOCATION=D:\archive\'
*.log_archive_format='ARC%S_%R.%T'
*.memory_target=315621376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'

Now replace "noida"  with "clone" and add two above parameter
clone.__db_cache_size=67108864
clone.__java_pool_size=12582912
clone.__large_pool_size=4194304
clone.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
clone.__pga_aggregate_target=83886080
clone.__sga_target=234881024
clone.__shared_io_pool_size=0
clone.__shared_pool_size=130023424
clone.__streams_pool_size=4194304
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\clone\CONTROL01.CTL','D:\ORACLE\ORADATA\clone\CONTROL02.CTL','D:\ORACLE\ORADATA\clone\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='clone'
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=cloneXDB)'
*.log_archive_dest_1='LOCATION=D:\archive\'
*.log_archive_format='ARC%S_%R.%T'
*.memory_target=315621376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
db_file_name_convert=('D:\oracle\oradata\noida\','D:\oracle\oradata\clone\')
log_file_name_convert=('D:\oracle\oradata\noida\','D:\oracle\oradata\clone\')


Step 2 : If using the Linux platform, create a password file as below (adjust the path to your ORACLE_HOME) :
orapwd file=/u01/app/oracle/product/9.2.0.1.0/dbs/orapwDUP password=password entries=10


Step 3 : Create the Oracle service (required for Oracle on Windows only)
C:\> oradim -new -sid clone -intpwd clone -startmode m


Step 4 : Create directories for database files
Create the required directories on the server for datafiles, redo logs, control files, temporary files, etc. This example assumes that all the database files will be stored under 'D:\oracle\oradata\clone' and 'D:\oracle\admin\clone\'.
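For example, the directories can be created from the Windows command prompt (the two paths below follow the pfile above; adjust them to your own layout) :
C:\> mkdir D:\oracle\oradata\clone
C:\> mkdir D:\oracle\admin\clone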


Step 5 : Configure the listener and service name (i.e., tnsnames.ora)
It is better to configure the listener through Net Manager, or we can add the details below to the listener.ora file :
(SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = D:\oracle\product\11.1.0\db_1)
      (SID_NAME = clone)
    )
and then perform the following steps :
C:\> lsnrctl
LSNRCTL> reload
 or
LSNRCTL> stop
LSNRCTL> start
LSNRCTL> exit
Now add following entries in tnsnames.ora
CLONE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = clone)
    )
  )
or we can also use the netca utility to configure the TNS entry.
Check the TNS entry as below :
C:\> tnsping clone
TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 22-APR-2011 11:33:04
Copyright (c) 1997, 2007, Oracle.  All rights reserved.
Used parameter files:
D:\oracle\product\11.1.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = clone)))
OK (30 msec)


Step 6 : Now connect to the duplicate instance. (The edited parameter file from step 1 has been saved as C:\clone.ora.)
c:\> set ORACLE_SID=clone
c:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Apr 22 10:44:14 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='C:\clone.ora';
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers                6332416 bytes
SQL> create spfile from pfile='C:\clone.ora';
SQL>shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers                6332416 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


Step 7 : Now duplicate the target database.
C:\>rman target sys/ramtech@noida auxiliary sys/clone@clone
Recovery Manager: Release 11.1.0.6.0 - Production on Fri Apr 22 11:16:38 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database:      NOIDA (DBID=1502483083)
connected to auxiliary database:  CLONE (not mounted)
RMAN> duplicate target database to "clone" nofilenamecheck;
Starting Duplicate Db at 22-APR-11
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=170 device type=DISK
contents of Memory Script:
{
   set until scn  1955915;
   set newname for datafile  1 to
 "D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF";
   set newname for datafile  2 to
 "D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF";
   set newname for datafile  3 to
 "D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF";
   set newname for datafile  4 to
 "D:\ORACLE\ORADATA\CLONE\USERS01.DBF";
   set newname for datafile  5 to
 "D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF";
   set newname for datafile  6 to
 "D:\ORACLE\ORADATA\CLONE\TRANS.DBF";
   restore
   clone database
   ;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 22-APR-11
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\CLONE\USERS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00005 to D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00006 to D:\ORACLE\ORADATA\CLONE\TRANS.DBF
channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TAG20110421T134444_6TZSWBRW_.BKP
channel ORA_AUX_DISK_1: piece handle=D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TAG20110421T134444_6TZSWBRW_.BKP tag=TAG20110421T134444
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:03:35
Finished restore at 22-APR-11
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
  MAXLOGFILES     37
  MAXLOGMEMBERS      3
  MAXDATAFILES       10
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( 'D:\ORACLE\ORADATA\CLONE\REDO01.LOG' ) SIZE 50 M  REUSE,
  GROUP  2 ( 'D:\ORACLE\ORADATA\CLONE\REDO02.LOG' ) SIZE 50 M  REUSE,
  GROUP  3 ( 'D:\ORACLE\ORADATA\CLONE\REDO03.LOG' ) SIZE 50 M  REUSE
 DATAFILE
  'D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF'
 CHARACTER SET WE8MSWIN1252
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=749128842 file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=749128842 file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF
contents of Memory Script:
{
   set until scn  1955915;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script
executing command: SET until clause
Starting recover at 22-APR-11
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 47 is already on disk as file D:\ARCHIVE\ARC00047_0748802215.001
archived log for thread 1 with sequence 48 is already on disk as file D:\ARCHIVE\ARC00048_0748802215.001
archived log for thread 1 with sequence 49 is already on disk as file D:\ARCHIVE\ARC00049_0748802215.001
archived log for thread 1 with sequence 50 is already on disk as file D:\ARCHIVE\ARC00050_0748802215.001
archived log for thread 1 with sequence 51 is already on disk as file D:\ARCHIVE\ARC00051_0748802215.001
archived log for thread 1 with sequence 52 is already on disk as file D:\ARCHIVE\ARC00052_0748802215.001
archived log for thread 1 with sequence 53 is already on disk as file D:\ARCHIVE\ARC00053_0748802215.001
archived log for thread 1 with sequence 54 is already on disk as file D:\ARCHIVE\ARC00054_0748802215.001
archived log for thread 1 with sequence 55 is already on disk as file D:\ARCHIVE\ARC00055_0748802215.001
archived log for thread 1 with sequence 56 is already on disk as file D:\ARCHIVE\ARC00056_0748802215.001
archived log for thread 1 with sequence 57 is already on disk as file D:\ARCHIVE\ARC00057_0748802215.001
archived log for thread 1 with sequence 58 is already on disk as file D:\ARCHIVE\ARC00058_0748802215.001
archived log for thread 1 with sequence 59 is already on disk as file D:\ARCHIVE\ARC00059_0748802215.001
archived log file name=D:\ARCHIVE\ARC00047_0748802215.001 thread=1 sequence=47
archived log file name=D:\ARCHIVE\ARC00048_0748802215.001 thread=1 sequence=48
archived log file name=D:\ARCHIVE\ARC00049_0748802215.001 thread=1 sequence=49
archived log file name=D:\ARCHIVE\ARC00050_0748802215.001 thread=1 sequence=50
archived log file name=D:\ARCHIVE\ARC00051_0748802215.001 thread=1 sequence=51
archived log file name=D:\ARCHIVE\ARC00052_0748802215.001 thread=1 sequence=52
archived log file name=D:\ARCHIVE\ARC00053_0748802215.001 thread=1 sequence=53
archived log file name=D:\ARCHIVE\ARC00054_0748802215.001 thread=1 sequence=54
archived log file name=D:\ARCHIVE\ARC00055_0748802215.001 thread=1 sequence=55
archived log file name=D:\ARCHIVE\ARC00056_0748802215.001 thread=1 sequence=56
archived log file name=D:\ARCHIVE\ARC00057_0748802215.001 thread=1 sequence=57
archived log file name=D:\ARCHIVE\ARC00058_0748802215.001 thread=1 sequence=58
archived log file name=D:\ARCHIVE\ARC00059_0748802215.001 thread=1 sequence=59
media recovery complete, elapsed time: 00:01:35
Finished recover at 22-APR-11


contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount ;
}
executing Memory Script
database dismounted
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area     318046208 bytes
Fixed Size                     1332920 bytes
Variable Size                234883400 bytes
Database Buffers              75497472 bytes
Redo Buffers                   6332416 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
  MAXLOGFILES     37
  MAXLOGMEMBERS      3
  MAXDATAFILES       10
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( 'D:\ORACLE\ORADATA\CLONE\REDO01.LOG' ) SIZE 50 M  REUSE,
  GROUP  2 ( 'D:\ORACLE\ORADATA\CLONE\REDO02.LOG' ) SIZE 50 M  REUSE,
  GROUP  3 ( 'D:\ORACLE\ORADATA\CLONE\REDO03.LOG' ) SIZE 50 M  REUSE
 DATAFILE
  'D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF'
 CHARACTER SET WE8MSWIN1252
contents of Memory Script:
{
   set newname for tempfile  1 to
 "D:\ORACLE\ORADATA\CLONE\TEMP02.DBF";
   switch clone tempfile all;
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\USERS01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\TRANS.DBF";
   switch clone datafile all;
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to D:\ORACLE\ORADATA\CLONE\TEMP02.DBF in control file
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF RECID=1 STAMP=749128969
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF RECID=2 STAMP=749128970
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF RECID=3 STAMP=749128970
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF RECID=4 STAMP=749128970
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF RECID=5 STAMP=749128971
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=749128969 file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=749128971 file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 22-APR-11
RMAN> exit
Recovery Manager complete.

Step 8 : Check the duplicate database "clone".
C:\>sqlplus sys/clone@clone as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Apr 22 11:43:05 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
CLONE     READ WRITE
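
As an optional further check, we can confirm that the clone's datafiles point to the new location; every row should show a path under D:\ORACLE\ORADATA\CLONE\ :
SQL> select name from v$datafile;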




Enjoy :-)



Wednesday, April 20, 2011

What's New in Backup and Recovery?

The new features of RMAN in Oracle 11g greatly improve the manageability of Oracle Database backup and recovery. In particular, manageability is increased by the introduction of Data Recovery Advisor, better integration of RMAN with Data Guard, expansion of the recovery catalog functionality, and improved management of archived redo logs.
The new features in this release include:

1.) Data Recovery Advisor :  Data Recovery Advisor is a built-in tool to automatically diagnose data failures and recommend repairs. We can repair failures manually or request that they be repaired automatically. Data Recovery Advisor supports the LIST FAILURE, CHANGE FAILURE, ADVISE FAILURE, and REPAIR FAILURE commands.
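As a quick sketch, a typical advisor session in RMAN looks like the following (this assumes a failure has already been recorded; ADVISE FAILURE must be run before REPAIR FAILURE) :
RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure preview;
RMAN> repair failure;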

2.) Improved integration with Data Guard  :  We can now set persistent RMAN configurations for a primary or physical standby database when RMAN is not connected as TARGET to the database. RMAN works seamlessly on all databases in the Data Guard environment, enabling us to use backups made on one database for restore and recovery on another database. The same recovery catalog can manage metadata for all primary and standby databases.

3.) Improved handling of long-term backups :  We can create a long-term or archival backup with BACKUP ... KEEP that retains only the archived log files needed to make the backup consistent.
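A minimal sketch of such an archival backup (the destination and restore point name are illustrative; KEEP backups cannot be written to the flash recovery area, hence the explicit FORMAT) :
RMAN> backup database format 'E:\archival\db_%U' keep until time 'sysdate+365' restore point yearly_bkp;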

4.) Backup failover for archived redo logs in the flash recovery area :  When backing up archived redo log files located in the flash recovery area, RMAN can fail over to archiving destinations outside the recovery area. RMAN can use an intact copy of an archived log in an alternative location to continue writing backups when a log in the recovery area is missing or corrupted.

5.) Archived log deletion policy enhancements :  When we CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG obey this configuration, as does the flash recovery area. We can also CONFIGURE an archived redo log deletion policy so that logs are eligible for deletion only after being applied to or transferred to standby database destinations. We can set the policy for mandatory standby destinations only, or for any standby destinations.
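For example, to make archived logs eligible for deletion only after they have been applied on all standby destinations :
RMAN> configure archivelog deletion policy to applied on all standby;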

6.) Network-enabled database duplication without backups : We can use the DUPLICATE command to create a duplicate database or physical standby database over the network without a need for pre-existing database backups. This form of duplication is called active database duplication.
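A minimal sketch of active duplication (connection strings follow the example used elsewhere in this article; both connections must use password-file authentication) :
C:\>rman target sys/xxxx@noida auxiliary sys/xxxx@clone
RMAN> duplicate target database to "clone" from active database nofilenamecheck;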

7.) Recovery catalog enhancements :  The owner of a recovery catalog can GRANT or REVOKE access to a subset of the catalog to other database users in the same recovery catalog database. This subset is called a virtual private catalog. We can also use IMPORTCATALOG command to merge one recovery catalog (or metadata for specific databases in the catalog) into another recovery catalog.
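For instance, connected to the recovery catalog as its owner, we might grant a user access to the metadata of one database, or merge in an older catalog (the user name and connect string are illustrative) :
RMAN> grant catalog for database noida to vpc_user;
RMAN> import catalog rcowner/xxxx@oldcat;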

8.) Multisection backups :  RMAN can back up a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section. We create a multisection backup by specifying SECTION SIZE on the BACKUP command. Restoring a multisection backup in parallel is automatic and requires no option. We can parallelize validations of a file with VALIDATE ... SECTION SIZE.
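For example (the section size is illustrative) :
RMAN> backup section size 500m datafile 1;
RMAN> validate datafile 1 section size 500m;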

9.) Undo optimization : The BACKUP command does not back up undo that is not needed for recovery of a backup. Undo is not needed if it was generated for a transaction that has already committed. This undo can represent the majority of undo in the database.

10.) Improved block media recovery performance  :  When performing block media recovery, RMAN automatically searches the flashback logs, if they are available, for the required blocks before searching backups. Using blocks from the flashback logs can significantly improve block media recovery performance.
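For reference, block media recovery in 11g is invoked with RECOVER ... BLOCK; the file and block numbers below are illustrative :
RMAN> recover datafile 6 block 24;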

11.) Improved block corruption detection  :  Several database components and utilities, including RMAN, can now detect a corrupt block and record it in V$DATABASE_BLOCK_CORRUPTION. When instance recovery detects a corrupt block, it records it in this view automatically. Oracle Database automatically updates this view when block corruptions are detected or repaired. The VALIDATE command is enhanced with many new options such as VALIDATE ... BLOCK and VALIDATE DATABASE.
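For example :
RMAN> validate database;
SQL> select * from v$database_block_corruption;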

12.) Faster backup compression  :  In addition to the existing BZIP2 algorithm for binary compression of backups, RMAN also supports the ZLIB algorithm. ZLIB runs faster than BZIP2, but produces larger files. ZLIB requires the Oracle Advanced Compression option. We can use the CONFIGURE COMPRESSION ALGORITHM command to choose between BZIP2 (the default) and ZLIB for RMAN backups.
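For example, to switch the configured algorithm to ZLIB :
RMAN> configure compression algorithm 'ZLIB';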

13.) Block change tracking support for standby databases : We can enable block change tracking on a physical standby database. When we back up the standby database, RMAN can use the block change tracking file to quickly identify the blocks that changed since the last incremental backup.
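On the standby, change tracking is enabled in the usual way (the file location below is illustrative) :
SQL> alter database enable block change tracking using file 'D:\oracle\oradata\stdby\bct.f';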

14.) Improved scripting with RMAN substitution variables  : We can create RMAN command files and stored scripts that accept user input at runtime. Thus, backup scripts can use RMAN substitution variables for tags, filenames, restore point names, and so on.
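As a small sketch, suppose a command file backup.cmd (the file name and tag are illustrative) contains :
backup database tag '&1';
It could then be invoked with a value for the substitution variable as below :
C:\>rman target / @backup.cmd using nightly_bkp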

15.) Integration with VSS-enabled applications  : The Oracle VSS writer is integrated with applications that use the Volume Shadow Copy Service (VSS) infrastructure on Windows. We can use VSS-enabled software and storage systems to back up and restore an Oracle database. A key benefit is the ability to make a shadow copy of an open database.

16.) Lost write detection : We can enable the DB_LOST_WRITE_PROTECT initialization parameter to detect a lost write during managed recovery of a standby database or media recovery of a primary database. Lost write detection is disabled by default.
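For example :
SQL> alter system set db_lost_write_protect=typical;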

17.) Backup of read-only transportable tablespaces   :  In previous releases, RMAN could not back up transportable tablespaces until they were made read/write at the destination database. Now RMAN can back up transportable tablespaces when they are not read/write and restore these backups.

18.) Backup and recovery enhancements in Oracle Enterprise Manager : Enterprise Manager includes an interface for Data Recovery Advisor.

19.) Oracle Flashback Transaction : We can reverse a transaction. Oracle Database determines the dependencies between transactions and in effect creates a compensating transaction that reverses the unwanted changes. The database rewinds to a state as if the transaction, and any transactions that could be dependent on it, never occurred.

20.) Flashback data archive : A flashback data archive enables the database to automatically track and store all transactional changes to a table for the duration of its lifetime. Thus, we do not need to build this functionality into database applications. A flashback data archive is especially useful for compliance, audit reports, data analysis, and DSS (Decision Support Systems). We can use some of the logical flashback features with a flashback data archive to access data from far in the past.
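A minimal sketch (the archive name, tablespace, and table are illustrative) :
SQL> create flashback archive fda1 tablespace users retention 1 year;
SQL> alter table neer.test flashback archive fda1;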

21.) Improved media recovery performance for databases on SMP systems  : Media recovery of databases on symmetric multiprocessing (SMP) computers is now faster. The performance improvements include the following:
  • More parallelism
  • More efficient asynchronous redo read, parse, and apply
  • Fewer synchronization points in the parallel apply algorithm
  • The media recovery checkpoint at a redo log boundary no longer blocks the apply of the next log.
No configuration is necessary, although we can use new parallel recovery wait events for tuning if the default apply rate is not satisfactory.




ENJOY :-)



Manually Creating an Oracle 11g Database

Creating an Oracle 11g database manually is much the same as in 10g. Here are the steps to create the database manually.

Step 1 : Create the initialization parameter file
Create the initialization parameter file, named initdelhi.ora. Below are the contents of my initdelhi.ora :

audit_file_dest='D:\oracle\admin\delhi\adump'
compatible='11.1.0.0.0'
control_files='D:\ORACLE\ORADATA\DELHI\CONTROL01.CTL'
db_block_size=8192
db_name='delhi'
diagnostic_dest='D:\oracle'
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'

Step 2 :  Create Oracle service

C:\>  oradim    -new    -sid    delhi    -startmode  m     -intpwd    oracle 
 Instance created.

Step 3 : Create the folders
Create the folders at the locations specified in the pfile (i.e., under the oradata and admin directories).
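For instance (the paths are taken from the pfile above) :
C:\> mkdir D:\oracle\oradata\delhi
C:\> mkdir D:\oracle\admin\delhi\adump
C:\> mkdir D:\oracle\admin\delhi\pfile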

Step 4 : Create the database

C:\>set ORACLE_SID=delhi
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 20 12:43:03 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='D:\oracle\admin\delhi\pfile\initdelhi.ora';
ORACLE instance started.
Total System Global Area  150667264 bytes
Fixed Size                  1331740 bytes
Variable Size              92278244 bytes
Database Buffers           50331648 bytes
Redo Buffers                6725632 bytes

SQL>  create database
  2  datafile 'D:\oracle\oradata\delhi\system01.dbf' size 1G
  3  sysaux datafile 'D:\oracle\oradata\delhi\sysaux.dbf' size 800M
  4  UNDO TABLESPACE "UNDOTBS1"
  5  DATAFILE 'D:\oracle\oradata\delhi\UNDOTBS01.DBF' size 200m
  6  LOGFILE group 1 'D:\oracle\oradata\delhi\REDO01.LOG' SIZE 100M,
  7  group 2'D:\oracle\oradata\delhi\REDO02.LOG' SIZE 100M,
  8  group 3'D:\oracle\oradata\delhi\REDO03.LOG' SIZE 100M;

Database created.

Step 5 : Run the following scripts
Run catalog.sql and catproc.sql from within SQL*Plus :

SQL> @D:\oracle\product\11.1.0\db_1\RDBMS\ADMIN\catalog.sql
SQL> @D:\oracle\product\11.1.0\db_1\RDBMS\ADMIN\catproc.sql

SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
DELHI     READ WRITE
SQL> select program from v$session;
PROGRAM
sqlplus.exe
ORACLE.EXE (q000)
ORACLE.EXE (QMNC)
ORACLE.EXE (W000)
ORACLE.EXE (FBDA)
ORACLE.EXE (SMCO)
ORACLE.EXE (DBRM)
ORACLE.EXE (PMON)
ORACLE.EXE (DIAG)
ORACLE.EXE (DBW0)
ORACLE.EXE (CKPT)
ORACLE.EXE (SMON)
ORACLE.EXE (RECO)
ORACLE.EXE (MMON)
ORACLE.EXE (MMNL)
ORACLE.EXE (LGWR)
ORACLE.EXE (VKTM)
ORACLE.EXE (DIA0)
ORACLE.EXE (MMAN)
ORACLE.EXE (PSP0)
sqlplus.exe
21 rows selected.



Enjoy :-)


TABLE_EXISTS_ACTION Parameter of Data Pump

Sometimes we need to import a table into a schema where the table already exists. If we simply import the table into that schema, the import throws an error regarding the existence of the table. If we have to preserve the old data of the table and append the new data, we can use the TABLE_EXISTS_ACTION parameter of Data Pump. The valid keywords are {SKIP | APPEND | TRUNCATE | REPLACE}.

The effects of the possible values are :

1.) SKIP : leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY. SKIP is the default.

2.) APPEND : loads rows from the source and leaves existing rows unchanged.

3.) TRUNCATE : deletes existing rows and then loads rows from the source.

4.) REPLACE : drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

Here is a  DEMO of the TABLE_EXISTS_ACTION parameter  :

First of all, we will export the table (say, TEST), which is in the NEER schema.

C:\>expdp system/xxxx@noida directory=dpump tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Export: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:21:28
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/********@noida directory=dpump tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "NEER"."TEST"                               5.062 KB       9 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DPUMP\NEER_TEST.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 13:23:13

Now we consider each of the valid keywords of the TABLE_EXISTS_ACTION parameter.

Case 1 : table_exists_action=skip (the default)

C:\>impdp system/xxxx@noida directory=dpump full=y  dumpfile=neer_test.dmp logfile=imp_neerlog.log
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:32:22
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump full=y dumpfile=neer_test.dmp logfile=imp_neerlog.log
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39151: Table "NEER"."TEST" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:32:35

Hence, the above results show that the table was skipped.

Case 2 : table_exists_action=append  
Now we drop the table TEST, recreate it, and populate it with values.

SQL> drop table test;
Table dropped.
SQL> create table test (id number);
Table created.

SQL> insert into test values (&Y);
Enter value for y: 111
old   1: insert into test values (&Y)
new   1: insert into test values (111)
1 row created.

SQL> /
Enter value for y: 222
old   1: insert into test values (&Y)
new   1: insert into test values (222)
1 row created.

SQL> /
Enter value for y: 333
old   1: insert into test values (&Y)
new   1: insert into test values (333)
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
       111
       222
       333

Now we will import the dump into the NEER schema, which already has the table TEST.

SQL>HOST impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=append
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:22:39
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "NEER"."TEST" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:22:58

SQL> select * from test;
        ID
----------
       111
       222
       333
        11
        22
        33
        44
        55
        66
        77
        88

        ID
----------
        99
12 rows selected.

Hence we find that the imported rows are appended to the existing table. With APPEND, only the table data is imported; dependent metadata such as indexes is skipped.

Case 3 :  table_exists_action=truncate
We already have 12 rows in the table TEST. Now we again import the dump, which has 9 rows.

SQL>host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=truncate
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:26:35
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39153: Table "NEER"."TEST" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:26:51

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Case 4 : table_exists_action=replace
Now we will add a few rows to the table TEST to check the results.

SQL> insert into test values(1234);
1 row created.
SQL> insert into test values(12345);
1 row created.
SQL> insert into test values(34567);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
      1234
     12345
     34567
12 rows selected.

SQL>host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=replace
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:33:23
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 14:33:42

SQL> select  *  from test ; 
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Hence, table_exists_action=replace internally drops and recreates the table, so all the existing metadata is also dropped and recreated.
Note : table_exists_action=replace is not valid for an import job that contains no metadata (for example, CONTENT=DATA_ONLY), since there is no table definition from which to recreate the table.



Enjoy :-)