Friday, April 22, 2011

What is the difference between V$ and GV$, and between V$ and V_$ ?

These “$” views are called dynamic performance views. They are continuously updated while a database is open and in use, and their contents relate primarily to performance. The actual dynamic performance views are identified by the prefix V_$. Public synonyms for these views have the prefix V$. We should access only the V$ objects, not the V_$ objects. Let's have a look at the object types.
SQL>select owner, object_name, object_type  from dba_objects where object_name like '%SESSION'   and  object_name like 'V%' ;
Output :
OWNER      OBJECT_NAME             OBJECT_TYPE
--------   ---------------------   -----------
SYS        V_$SESSION              VIEW
SYS        V_$HS_SESSION           VIEW
SYS        V_$PX_SESSION           VIEW
SYS        V_$DETACHED_SESSION     VIEW
SYS        V_$LOGMNR_SESSION       VIEW
SYS        V_$DATAPUMP_SESSION     VIEW
SYS        V$XS_SESSION            VIEW
PUBLIC     V$SESSION               SYNONYM
PUBLIC     V$HS_SESSION            SYNONYM
PUBLIC     V$PX_SESSION            SYNONYM
PUBLIC     V$DETACHED_SESSION      SYNONYM
PUBLIC     V$LOGMNR_SESSION        SYNONYM
PUBLIC     V$DATAPUMP_SESSION      SYNONYM
PUBLIC     V$XS_SESSION            SYNONYM
Hence "v$xx" objects are synonym and "v$_xx"  are the views.


Let's now have a look at the difference between V$ views and GV$ views.
GV$ views are called 'Global Dynamic Performance views' and retrieve information about all started instances accessing one RAC database. In contrast, standard dynamic performance views (V$ views) retrieve information about the local instance only. For each of the V$ views available there is a corresponding GV$ view, with a few exceptions. In addition to the V$ columns, each GV$ view possesses an additional column named INST_ID, which displays the number of the instance from which the associated V$ information was obtained.
GV$ views use a special form of parallel execution: the parallel execution coordinator runs on the instance that the client connects to, and one slave is allocated on each instance to query the underlying V$ view for that instance.
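For example, a quick sketch of the INST_ID column in action (on a single-instance database only INST_ID 1 will appear, and the session counts will of course vary):
SQL> select inst_id, count(*) sessions from gv$session group by inst_id order by 1;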


Enjoy     :-)  

Track Redo Generation per Hour and per Day

Here are the scripts for tracking redo generation per day and per hour.

Track redo generation by day

SQL> select trunc(completion_time) rundate, count(*) logswitch,
round((sum(blocks*block_size)/1024/1024)) "REDO PER DAY (MB)"
from v$archived_log group by trunc(completion_time) order by 1;

Sample Output  :
RUNDATE      LOGSWITCH   REDO PER DAY (MB)
---------    ---------   -----------------
18-APR-11            2                   1
19-APR-11            5                 230
20-APR-11           36                1659
21-APR-11           14                 175
22-APR-11            5                 147

Track the Amount of Redo Generated per Hour :

SQL> SELECT  Start_Date,   Start_Time,   Num_Logs,
Round(Num_Logs * (Vl.Bytes / (1024 * 1024)),2) AS Mbytes, Vdb.NAME AS Dbname
FROM (SELECT To_Char(Vlh.First_Time, 'YYYY-MM-DD') AS Start_Date,
To_Char(Vlh.First_Time, 'HH24') || ':00' AS Start_Time, COUNT(Vlh.Thread#) Num_Logs
FROM V$log_History Vlh 
GROUP BY To_Char(Vlh.First_Time,  'YYYY-MM-DD'),
To_Char(Vlh.First_Time, 'HH24') || ':00') Log_Hist,
V$log Vl ,  V$database Vdb
WHERE Vl.Group# = 1
ORDER BY Log_Hist.Start_Date, Log_Hist.Start_Time;

Sample Output :
START_DATE START     NUM_LOGS  MBYTES     DBNAME
---------- -----     --------  ------     ------
2011-04-18 16:00          1              50              NOIDA
2011-04-18 17:00          2              100            NOIDA
2011-04-19 00:00          1              50              NOIDA
2011-04-19 09:00          1              50              NOIDA
2011-04-19 14:00          1              50              NOIDA
2011-04-19 20:00          1              50              NOIDA
2011-04-19 23:00          1              50              NOIDA
2011-04-20 06:00          1              50              NOIDA
2011-04-20 10:00          5              250            NOIDA
2011-04-20 11:00          8              400            NOIDA
2011-04-20 12:00         21            1050           NOIDA
2011-04-20 14:00          1             50               NOIDA
2011-04-21 09:00          1             50               NOIDA
2011-04-21 13:00          3            150              NOIDA
2011-04-21 15:00          1            50                NOIDA
2011-04-21 17:00          8            40                NOIDA
2011-04-21 22:00          1            50                NOIDA
2011-04-22 00:00          1            50                NOIDA
2011-04-22 05:00          1            50                NOIDA
2011-04-22 14:00          2           100              NOIDA
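Note that the hourly query above multiplies the log-switch count by the size of redo log group 1, so it assumes all online log groups are the same size. As an alternative sketch (archivelog mode assumed), the hourly redo volume can also be read directly from v$archived_log:

SQL> select to_char(trunc(completion_time,'HH24'),'YYYY-MM-DD HH24:MI') start_hour,
count(*) logswitch,
round(sum(blocks*block_size)/1024/1024) "REDO PER HOUR (MB)"
from v$archived_log
group by trunc(completion_time,'HH24')
order by 1;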



Enjoy      :-) 


Recovery from complete loss of all online redo log files using RMAN


Best practice dictates that, to avoid such scenarios, we should multiplex the online redo log files. Each group should have at least 2 members, and each member should be located on a different physical disk.
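For example, a second member could be added to group 1 on another disk as below (the E:\ path is just an illustrative, hypothetical location):
SQL> alter database add logfile member 'E:\ORACLE\ORADATA\NOIDA\REDO01_B.LOG' to group 1;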

Check the online redo log files:
SQL> select member from v$Logfile;
MEMBER
--------------------------------------
D:\ORACLE\ORADATA\NOIDA\REDO01.LOG
D:\ORACLE\ORADATA\NOIDA\REDO03.LOG
D:\ORACLE\ORADATA\NOIDA\REDO02.LOG

If one or all of the online redo logfiles are deleted, the database hangs and in the alert log file we can see the following error messages:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\NOIDA\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified
The file is missing at the operating system level.

Using RMAN, we can recover from this error by restoring the database from the backup and recovering it up to the last available archived redo logfile.
SQL> select group#, sequence#, first_change#, status from v$log;
GROUP#   SEQUENCE#   FIRST_CHANGE#   STATUS
------   ---------   -------------   --------
     1          61         1997353   CURRENT
     3          60         1955915   INACTIVE
     2          59         1919750   INACTIVE

Shut down the database and bring it back to the mount stage:

SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             255854920 bytes
Database Buffers           54525952 bytes
Redo Buffers                6332416 bytes
Database mounted.
SQL> exit

Using RMAN, connect to the target database:
 C:\>rman target sys/ramtech@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Fri Apr 22 14:49:25 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: NOIDA (DBID=1502483083, not open)

 RMAN> restore database;
Starting restore at 22-APR-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF
channel ORA_DISK_1: restoring datafile 00002 to D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF
channel ORA_DISK_1: restoring datafile 00003 to D:\ORACLE\ORADATA\NOIDA\UNDOTBS01.DBF
channel ORA_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\NOIDA\USERS01.DBF
channel ORA_DISK_1: restoring datafile 00005 to D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF
channel ORA_DISK_1: restoring datafile 00006 to D:\ORACLE\ORADATA\NOIDA\TRANS.DBF
channel ORA_DISK_1: reading from backup piece D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TAG20110421T134444_6TZSWBRW_.BKP
channel ORA_DISK_1: piece handle=D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TAG20110421T134444_6TZSWBRW_.BKP tag=TAG20110421T134444
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:03:05
Finished restore at 22-APR-11

RMAN> recover database until sequence 61;
Starting recover at 22-APR-11
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 47 is already on disk as file D:\ARCHIVE\ARC00047_0748802215.001
archived log for thread 1 with sequence 48 is already on disk as file D:\ARCHIVE\ARC00048_0748802215.001
archived log for thread 1 with sequence 49 is already on disk as file D:\ARCHIVE\ARC00049_0748802215.001
archived log for thread 1 with sequence 50 is already on disk as file D:\ARCHIVE\ARC00050_0748802215.001
archived log for thread 1 with sequence 51 is already on disk as file D:\ARCHIVE\ARC00051_0748802215.001
archived log for thread 1 with sequence 52 is already on disk as file D:\ARCHIVE\ARC00052_0748802215.001
archived log for thread 1 with sequence 53 is already on disk as file D:\ARCHIVE\ARC00053_0748802215.001
archived log for thread 1 with sequence 54 is already on disk as file D:\ARCHIVE\ARC00054_0748802215.001
archived log for thread 1 with sequence 55 is already on disk as file D:\ARCHIVE\ARC00055_0748802215.001
archived log for thread 1 with sequence 56 is already on disk as file D:\ARCHIVE\ARC00056_0748802215.001
archived log for thread 1 with sequence 57 is already on disk as file D:\ARCHIVE\ARC00057_0748802215.001
archived log for thread 1 with sequence 58 is already on disk as file D:\ARCHIVE\ARC00058_0748802215.001
archived log for thread 1 with sequence 59 is already on disk as file D:\ARCHIVE\ARC00059_0748802215.001
archived log for thread 1 with sequence 60 is already on disk as file D:\ARCHIVE\ARC00060_0748802215.001
archived log file name=D:\ARCHIVE\ARC00047_0748802215.001 thread=1 sequence=47
archived log file name=D:\ARCHIVE\ARC00048_0748802215.001 thread=1 sequence=48
archived log file name=D:\ARCHIVE\ARC00049_0748802215.001 thread=1 sequence=49
archived log file name=D:\ARCHIVE\ARC00050_0748802215.001 thread=1 sequence=50
archived log file name=D:\ARCHIVE\ARC00051_0748802215.001 thread=1 sequence=51
archived log file name=D:\ARCHIVE\ARC00052_0748802215.001 thread=1 sequence=52
archived log file name=D:\ARCHIVE\ARC00053_0748802215.001 thread=1 sequence=53
archived log file name=D:\ARCHIVE\ARC00054_0748802215.001 thread=1 sequence=54
archived log file name=D:\ARCHIVE\ARC00055_0748802215.001 thread=1 sequence=55
archived log file name=D:\ARCHIVE\ARC00056_0748802215.001 thread=1 sequence=56
archived log file name=D:\ARCHIVE\ARC00057_0748802215.001 thread=1 sequence=57
archived log file name=D:\ARCHIVE\ARC00058_0748802215.001 thread=1 sequence=58
archived log file name=D:\ARCHIVE\ARC00059_0748802215.001 thread=1 sequence=59
archived log file name=D:\ARCHIVE\ARC00060_0748802215.001 thread=1 sequence=60
media recovery complete, elapsed time: 00:02:01
Finished recover at 22-APR-11
RMAN> alter database open resetlogs;
database opened
 RMAN>exit

The OPEN RESETLOGS also re-creates the online redo log files at the operating system level.

Since we have done an incomplete recovery followed by OPEN RESETLOGS, we should take a fresh complete backup of the database.
NOTE: Please make sure you remove all the old archived logfiles from the archive area.
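A minimal sketch of such a fresh backup, once the old archived logs have been cleared out (assuming a configured disk channel and sufficient space in the backup destination):
RMAN> backup database plus archivelog;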

Enjoy     :-)

Cloning a Database on the Same Server Using RMAN

A nice feature of RMAN is the ability to duplicate, or clone, a database from a previous backup. It is possible to create a duplicate database on a remote server with the same file structure, on a remote server with a different file structure, or on the local server with a different file structure. In this article I'll demonstrate how to duplicate a database on the local server. This can prove useful when we want to recover selected objects from a backup, rather than roll back a whole database or tablespace.


Purpose of Database Duplication :  The goal of database duplication is to create a duplicate database, which is a separate database that contains all or a subset of the data in the source database. A duplicate database is useful for a variety of purposes, most of which involve testing. We can perform the following tasks in a duplicate database:
1.) Test backup and recovery procedures.
2.) Test an upgrade to a new release of Oracle Database.
3.) Test the effect of applications on database performance.
4.) Generate reports.
5.) Export data, such as a table that was inadvertently dropped from the production database, and then import it back into the production database.
For example, we can duplicate the production database on host1 to host2, and then use the duplicate database on host2 to practice restoring and recovering this database while the production database on host1 operates as usual.

Here, we will go step by step through creating the clone (duplicate) database.


Terms used to describe the method step by step :
Target: database to be cloned = "noida"
Duplicate or clone database = "clone"
Here the source (target) and duplicate database instances are on the same Windows server.

Step 1 : Prepare the init.ora file for the duplicate instance
Create a pfile from the spfile of the target database as below:
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE
SQL> create pfile='c:\noidainit.ora' from spfile;
File created.

Now edit the noidainit file:


a.) Replace "noida" with "clone".
b.) Add the two parameters below, as shown in the clone parameter file:
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT

Below is the pfile of the target database (noida) :
noida.__db_cache_size=67108864
noida.__java_pool_size=12582912
noida.__large_pool_size=4194304
noida.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
noida.__pga_aggregate_target=83886080
noida.__sga_target=234881024
noida.__shared_io_pool_size=0
noida.__shared_pool_size=130023424
noida.__streams_pool_size=4194304
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='noida'
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=noidaXDB)'
*.log_archive_dest_1='LOCATION=D:\archive\'
*.log_archive_format='ARC%S_%R.%T'
*.memory_target=315621376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'

Now replace "noida"  with "clone" and add two above parameter
clone.__db_cache_size=67108864
clone.__java_pool_size=12582912
clone.__large_pool_size=4194304
clone.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
clone.__pga_aggregate_target=83886080
clone.__sga_target=234881024
clone.__shared_io_pool_size=0
clone.__shared_pool_size=130023424
clone.__streams_pool_size=4194304
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\clone\CONTROL01.CTL','D:\ORACLE\ORADATA\clone\CONTROL02.CTL','D:\ORACLE\ORADATA\clone\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='clone'
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=cloneXDB)'
*.log_archive_dest_1='LOCATION=D:\archive\'
*.log_archive_format='ARC%S_%R.%T'
*.memory_target=315621376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
db_file_name_convert=('D:\oracle\oradata\noida\','D:\oracle\oradata\clone\')
log_file_name_convert=('D:\oracle\oradata\noida\','D:\oracle\oradata\clone\')


Step 2 : If using a Linux platform, create the password file as below:
orapwd file=/u01/app/oracle/product/9.2.0.1.0/dbs/orapwDUP password=password entries=10


Step 3 : If using Windows, create the Oracle service for the clone instance (required for Oracle on Windows only):
C:\> oradim -new -sid clone -intpwd clone -startmode m


Step 4 : Create directories for the database files
Create the required directories on the target server for datafiles, redo logs, control files, temporary files, etc. This example assumes that all the database files will be stored under 'D:\oracle\oradata\clone' and 'D:\oracle\admin\clone\'.


Step 5 : Configure the listener and service name (i.e., tnsnames.ora)
It is better to configure the listener through Net Manager, or we can add the details below to the listener.ora file:
(SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = D:\oracle\product\11.1.0\db_1)
      (SID_NAME = clone)
    )
and then reload or restart the listener:
C:\> lsnrctl
LSNRCTL> reload
 or
LSNRCTL> stop
LSNRCTL> start
LSNRCTL> exit
Now add the following entries in tnsnames.ora:
CLONE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = clone)
    )
  )
Alternatively, we can use the netca utility to configure the TNS entry.
Check the TNS entry as below:
C:\> tnsping clone
TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 22-APR-2011 11:33:04
Copyright (c) 1997, 2007, Oracle.  All rights reserved.
Used parameter files:
D:\oracle\product\11.1.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = clone)))
OK (30 msec)


Step 6 : Now connect to the duplicate instance and start it in NOMOUNT.
c:\> set ORACLE_SID=clone
c:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Apr 22 10:44:14 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='C:\clone.ora';
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers                6332416 bytes
SQL> create spfile from pfile='C:\clone.ora';
SQL>shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers                6332416 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


Step 7 : Now duplicate the target database.
C:\>rman target sys/ramtech@noida auxiliary sys/clone@clone
Recovery Manager: Release 11.1.0.6.0 - Production on Fri Apr 22 11:16:38 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database:      NOIDA (DBID=1502483083)
connected to auxiliary database:  CLONE (not mounted)
RMAN> duplicate target database to "clone" nofilenamecheck;
Starting Duplicate Db at 22-APR-11
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=170 device type=DISK
contents of Memory Script:
{
   set until scn  1955915;
   set newname for datafile  1 to
 "D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF";
   set newname for datafile  2 to
 "D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF";
   set newname for datafile  3 to
 "D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF";
   set newname for datafile  4 to
 "D:\ORACLE\ORADATA\CLONE\USERS01.DBF";
   set newname for datafile  5 to
 "D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF";
   set newname for datafile  6 to
 "D:\ORACLE\ORADATA\CLONE\TRANS.DBF";
   restore
   clone database
   ;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 22-APR-11
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\CLONE\USERS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00005 to D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00006 to D:\ORACLE\ORADATA\CLONE\TRANS.DBF
channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TAG20110421T134444_6TZSWBRW_.BKP
channel ORA_AUX_DISK_1: piece handle=D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TAG20110421T134444_6TZSWBRW_.BKP tag=TAG20110421T134444
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:03:35
Finished restore at 22-APR-11
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
  MAXLOGFILES     37
  MAXLOGMEMBERS      3
  MAXDATAFILES       10
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( 'D:\ORACLE\ORADATA\CLONE\REDO01.LOG' ) SIZE 50 M  REUSE,
  GROUP  2 ( 'D:\ORACLE\ORADATA\CLONE\REDO02.LOG' ) SIZE 50 M  REUSE,
  GROUP  3 ( 'D:\ORACLE\ORADATA\CLONE\REDO03.LOG' ) SIZE 50 M  REUSE
 DATAFILE
  'D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF'
 CHARACTER SET WE8MSWIN1252
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=749128842 file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=749128842 file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF
contents of Memory Script:
{
   set until scn  1955915;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script
executing command: SET until clause
Starting recover at 22-APR-11
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 47 is already on disk as file D:\ARCHIVE\ARC00047_0748802215.001
archived log for thread 1 with sequence 48 is already on disk as file D:\ARCHIVE\ARC00048_0748802215.001
archived log for thread 1 with sequence 49 is already on disk as file D:\ARCHIVE\ARC00049_0748802215.001
archived log for thread 1 with sequence 50 is already on disk as file D:\ARCHIVE\ARC00050_0748802215.001
archived log for thread 1 with sequence 51 is already on disk as file D:\ARCHIVE\ARC00051_0748802215.001
archived log for thread 1 with sequence 52 is already on disk as file D:\ARCHIVE\ARC00052_0748802215.001
archived log for thread 1 with sequence 53 is already on disk as file D:\ARCHIVE\ARC00053_0748802215.001
archived log for thread 1 with sequence 54 is already on disk as file D:\ARCHIVE\ARC00054_0748802215.001
archived log for thread 1 with sequence 55 is already on disk as file D:\ARCHIVE\ARC00055_0748802215.001
archived log for thread 1 with sequence 56 is already on disk as file D:\ARCHIVE\ARC00056_0748802215.001
archived log for thread 1 with sequence 57 is already on disk as file D:\ARCHIVE\ARC00057_0748802215.001
archived log for thread 1 with sequence 58 is already on disk as file D:\ARCHIVE\ARC00058_0748802215.001
archived log for thread 1 with sequence 59 is already on disk as file D:\ARCHIVE\ARC00059_0748802215.001
archived log file name=D:\ARCHIVE\ARC00047_0748802215.001 thread=1 sequence=47
archived log file name=D:\ARCHIVE\ARC00048_0748802215.001 thread=1 sequence=48
archived log file name=D:\ARCHIVE\ARC00049_0748802215.001 thread=1 sequence=49
archived log file name=D:\ARCHIVE\ARC00050_0748802215.001 thread=1 sequence=50
archived log file name=D:\ARCHIVE\ARC00051_0748802215.001 thread=1 sequence=51
archived log file name=D:\ARCHIVE\ARC00052_0748802215.001 thread=1 sequence=52
archived log file name=D:\ARCHIVE\ARC00053_0748802215.001 thread=1 sequence=53
archived log file name=D:\ARCHIVE\ARC00054_0748802215.001 thread=1 sequence=54
archived log file name=D:\ARCHIVE\ARC00055_0748802215.001 thread=1 sequence=55
archived log file name=D:\ARCHIVE\ARC00056_0748802215.001 thread=1 sequence=56
archived log file name=D:\ARCHIVE\ARC00057_0748802215.001 thread=1 sequence=57
archived log file name=D:\ARCHIVE\ARC00058_0748802215.001 thread=1 sequence=58
archived log file name=D:\ARCHIVE\ARC00059_0748802215.001 thread=1 sequence=59
media recovery complete, elapsed time: 00:01:35
Finished recover at 22-APR-11


contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount ;
}
executing Memory Script
database dismounted
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area     318046208 bytes
Fixed Size                     1332920 bytes
Variable Size                234883400 bytes
Database Buffers              75497472 bytes
Redo Buffers                   6332416 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
  MAXLOGFILES     37
  MAXLOGMEMBERS      3
  MAXDATAFILES       10
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( 'D:\ORACLE\ORADATA\CLONE\REDO01.LOG' ) SIZE 50 M  REUSE,
  GROUP  2 ( 'D:\ORACLE\ORADATA\CLONE\REDO02.LOG' ) SIZE 50 M  REUSE,
  GROUP  3 ( 'D:\ORACLE\ORADATA\CLONE\REDO03.LOG' ) SIZE 50 M  REUSE
 DATAFILE
  'D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF'
 CHARACTER SET WE8MSWIN1252
contents of Memory Script:
{
   set newname for tempfile  1 to
 "D:\ORACLE\ORADATA\CLONE\TEMP02.DBF";
   switch clone tempfile all;
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\USERS01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF";
   catalog clone datafilecopy  "D:\ORACLE\ORADATA\CLONE\TRANS.DBF";
   switch clone datafile all;
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to D:\ORACLE\ORADATA\CLONE\TEMP02.DBF in control file
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF RECID=1 STAMP=749128969
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF RECID=2 STAMP=749128970
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF RECID=3 STAMP=749128970
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF RECID=4 STAMP=749128970
cataloged datafile copy
datafile copy file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF RECID=5 STAMP=749128971
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=749128969 file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=749128971 file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 22-APR-11
RMAN> exit
Recovery Manager complete.

Step 8 : Check the duplicate "clone" database.
C:\>sqlplus sys/clone@clone as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Apr 22 11:43:05 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
CLONE     READ WRITE
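As a further sanity check, a backup-based DUPLICATE assigns the clone its own new DBID, so it should differ from the target's 1502483083:
SQL> select dbid, name, open_mode from v$database;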




Enjoy   :-)



Wednesday, April 20, 2011

What's New in Backup and Recovery ?

The new features of RMAN in Oracle 11g greatly improve the manageability of Oracle Database backup and recovery. In particular, manageability is increased by the introduction of the Data Recovery Advisor, better integration of RMAN with Data Guard, expansion of the recovery catalog functionality, and improved management of archived redo logs.
The new features in this release include:

1.) Data Recovery Advisor :  Data Recovery Advisor is a built-in tool to automatically diagnose data failures and recommend repairs. We can repair failures manually or request that they be repaired automatically. Data Recovery Advisor supports the LIST FAILURE, CHANGE FAILURE, ADVISE FAILURE, and REPAIR FAILURE commands.
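A typical interactive sequence might look like the sketch below (the advisor only lists failures it has actually recorded, so output varies):
RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure preview;
RMAN> repair failure;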

2.) Improved integration with Data Guard  :  We can now set persistent RMAN configurations for a primary or physical standby database when RMAN is not connected as TARGET to the database. RMAN works seamlessly on all databases in the Data Guard environment, enabling us to use backups made on one database for restore and recovery on another database. The same recovery catalog can manage metadata for all primary and standby databases.

3.) Improved handling of long-term backups :  We can create a long-term or archival backup with BACKUP ... KEEP that retains only the archived log files needed to make the backup consistent.
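For example, an archival backup kept for a year and written outside the flash recovery area might look like this sketch (the format path, tag, and restore point names are hypothetical):
RMAN> backup database format 'D:\archival_bkp\db_%U' tag 'yearly_2011' keep until time 'sysdate+365' restore point before_upgrade;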

4.) Backup failover for archived redo logs in the flash recovery area :  When backing up archived redo log files located in the flash recovery area, RMAN can fail over to archiving destinations outside the recovery area. RMAN can use an intact copy of an archived log in an alternative location to continue writing backups when a log in the recovery area is missing or corrupted.

5.) Archived log deletion policy enhancements :  When we CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG obey this configuration, as does the flash recovery area. We can also CONFIGURE an archived redo log deletion policy so that logs are eligible for deletion only after being applied to or transferred to standby database destinations. We can set the policy for mandatory standby destinations only, or for any standby destinations.
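For instance, in a Data Guard configuration the policy might be set as below (a sketch; only meaningful when a standby destination is configured):
RMAN> configure archivelog deletion policy to applied on standby;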

6.) Network-enabled database duplication without backups :  We can use the DUPLICATE command to create a duplicate database or physical standby database over the network without a need for pre-existing database backups. This form of duplication is called active database duplication.
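A minimal sketch, reusing the "noida"/"clone" setup from the cloning article above (the auxiliary instance must already be started in NOMOUNT with a suitable pfile and password file):
C:\>rman target sys/ramtech@noida auxiliary sys/clone@clone
RMAN> duplicate target database to clone from active database nofilenamecheck;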

7.) Recovery catalog enhancements :  The owner of a recovery catalog can GRANT or REVOKE access to a subset of the catalog to other database users in the same recovery catalog database. This subset is called a virtual private catalog. We can also use the IMPORT CATALOG command to merge one recovery catalog (or metadata for specific databases in the catalog) into another recovery catalog.
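For example, the base catalog owner might grant a user access to the metadata of just one database, and that user then creates his virtual catalog (the catalog user names and connect strings here are hypothetical):
RMAN> connect catalog rman_owner/password@catdb
RMAN> grant catalog for database noida to vpc_user;
RMAN> connect catalog vpc_user/password@catdb
RMAN> create virtual catalog;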

8.) Multisection backups :  RMAN can back up a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section. We create a multisection backup by specifying SECTION SIZE on the BACKUP command. Restoring a multisection backup in parallel is automatic and requires no option. We can parallelize validations of a file with VALIDATE ... SECTION SIZE.
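A sketch of a multisection backup and validation of a large datafile (the file number and section size are illustrative):
RMAN> backup section size 500m datafile 1;
RMAN> validate datafile 1 section size 500m;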

9.) Undo optimization :  The BACKUP command does not back up undo that is not needed for recovery of a backup. Undo is not needed if it was generated for a transaction that has already committed. This undo can represent the majority of undo in the database.

10.) Improved block media recovery performance  :  When performing block media recovery, RMAN automatically searches the flashback logs, if they are available, for the required blocks before searching backups. Using blocks from the flashback logs can significantly improve block media recovery performance.
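For example, once a corrupt block has been recorded, it can be repaired individually with the new RECOVER ... BLOCK syntax (file and block numbers are illustrative):
RMAN> recover datafile 4 block 10;
RMAN> recover corruption list;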

11.) Improved block corruption detection  :  Several database components and utilities, including RMAN, can now detect a corrupt block and record it in V$DATABASE_BLOCK_CORRUPTION. When instance recovery detects a corrupt block, it records it in this view automatically. Oracle Database automatically updates this view when block corruptions are detected or repaired. The VALIDATE command is enhanced with many new options such as VALIDATE ... BLOCK and VALIDATE DATABASE.
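A quick sketch of the new validation options and of checking the corruption view afterwards:
RMAN> validate database;
RMAN> validate datafile 4 block 10 to 12;
SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;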

12.) Faster backup compression  :  In addition to the existing BZIP2 algorithm for binary compression of backups, RMAN also supports the ZLIB algorithm. ZLIB runs faster than BZIP2, but produces larger files. ZLIB requires the Oracle Advanced Compression option. We can use the CONFIGURE COMPRESSION ALGORITHM command to choose between BZIP2 (the default) and ZLIB for RMAN backups.
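For example (assuming the Advanced Compression option is licensed):
RMAN> configure compression algorithm 'ZLIB';
RMAN> backup as compressed backupset database;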

13.) Block change tracking support for standby databases :  We can enable block change tracking on a physical standby database. When we back up the standby database, RMAN can use the block change tracking file to quickly identify the blocks that changed since the last incremental backup.
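Enabling it on the standby uses the same statement as on a primary (the tracking file location here is hypothetical):
SQL> alter database enable block change tracking using file 'D:\oracle\oradata\stby\bct.chg';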

14.) Improved scripting with RMAN substitution variables  : We can create RMAN command files and stored scripts that accept user input at runtime. Thus, backup scripts can use RMAN substitution variables for tags, filenames, restore point names, and so on.
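For example, a command file backup.cmd (a hypothetical name) could take the backup tag as substitution variable &1 and be invoked with the new USING clause (a sketch):
# backup.cmd
run { backup database tag '&1'; }
C:\>rman target / @backup.cmd using nightly_bkp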

15.) Integration with VSS-enabled applications  : The Oracle VSS writer is integrated with applications that use the Volume Shadow Copy Service (VSS) infrastructure on Windows. We can use VSS-enabled software and storage systems to back up and restore an Oracle database. A key benefit is the ability to make a shadow copy of an open database.

16.) Lost write detection :  We can enable the DB_LOST_WRITE_PROTECT initialization parameter to detect a lost write during managed recovery of a standby database or media recovery of a primary database. Lost write detection is disabled by default.
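For example:
SQL> alter system set db_lost_write_protect = typical scope=both;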

17.) Backup of read-only transportable tablespaces   :  In previous releases, RMAN could not back up transportable tablespaces until they were made read/write at the destination database. Now RMAN can back up transportable tablespaces when they are not read/write and restore these backups.

18.) Backup and recovery enhancements in Oracle Enterprise Manager : Enterprise Manager includes an interface for Data Recovery Advisor.

19.) Oracle Flashback Transaction : We can reverse a transaction. Oracle Database determines the dependencies between transactions and in effect creates a compensating transaction that reverses the unwanted changes. The database rewinds to a state as if the transaction, and any transactions that could be dependent on it, never occurred.

20.) Flashback data archive :  A flashback data archive enables the database to automatically track and store all transactional changes to a table for the duration of its lifetime. Thus, we do not need to build this functionality into database applications. A flashback data archive is especially useful for compliance, audit reports, data analysis, and DSS (Decision Support Systems). We can use some of the logical flashback features with a flashback data archive to access data from far in the past.
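A minimal sketch (the tablespace, quota, retention, and table names here are hypothetical, and the FLASHBACK ARCHIVE ADMINISTER privilege is required):
SQL> create flashback archive fda1 tablespace users quota 1g retention 1 year;
SQL> alter table scott.emp flashback archive fda1;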

21.) Improved media recovery performance for databases on SMP systems  : Media recovery of databases on symmetric multiprocessing (SMP) computers is now faster. The performance improvements include the following:
  • More parallelism
  • More efficient asynchronous redo read, parse, and apply
  • Fewer synchronization points in the parallel apply algorithm
  • The media recovery checkpoint at a redo log boundary no longer blocks the apply of the next log.
No configuration is necessary, although we can use new parallel recovery wait events for tuning if the default apply rate is not satisfactory.




ENJOY    :-)