Wednesday, April 20, 2011

What's New in Backup and Recovery?

The new features of RMAN in Oracle 11g greatly improve the manageability of Oracle Database backup and recovery. In particular, manageability is improved by the introduction of Data Recovery Advisor, better integration of RMAN with Data Guard, expanded recovery catalog functionality, and improved management of archived redo logs.
The new features in this release include:

1.) Data Recovery Advisor :  Data Recovery Advisor is a built-in tool to automatically diagnose data failures and recommend repairs. We can repair failures manually or request that they be repaired automatically. Data Recovery Advisor supports the LIST FAILURE, CHANGE FAILURE, ADVISE FAILURE, and REPAIR FAILURE commands.
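A minimal sketch of the Data Recovery Advisor workflow from the RMAN prompt (the failures reported, if any, depend on the state of the database):
RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;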

2.) Improved integration with Data Guard  :  We can now set persistent RMAN configurations for a primary or physical standby database when RMAN is not connected as TARGET to the database. RMAN works seamlessly on all databases in the Data Guard environment, enabling us to use backups made on one database for restore and recovery on another database. The same recovery catalog can manage metadata for all primary and standby databases.
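For example, a hedged sketch of configuring a standby site while connected only to the recovery catalog (the catalog credentials, DBID, and DB_UNIQUE_NAME shown are illustrative):
RMAN> CONNECT CATALOG rman/rman@catdb
RMAN> SET DBID 1625818167;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt FOR DB_UNIQUE_NAME delhi_stby;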

3.) Improved handling of long-term backups :  We can create a long-term or archival backup with BACKUP ... KEEP that retains only the archived log files needed to make the backup consistent.
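For example, a hedged sketch of an archival backup (the tag, retention period, and restore point name are illustrative):
RMAN> BACKUP DATABASE TAG 'yearly_archival' KEEP UNTIL TIME 'SYSDATE+365' RESTORE POINT year_end_2011;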

4.) Backup failover for archived redo logs in the flash recovery area :  When backing up archived redo log files located in the flash recovery area, RMAN can fail over to archiving destinations outside the recovery area. RMAN can use an intact copy of an archived log in an alternative location to continue writing backups when a log in the recovery area is missing or corrupted.

5.) Archived log deletion policy enhancements :  When we CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG obey this configuration, as does the flash recovery area. We can also CONFIGURE an archived redo log deletion policy so that logs are eligible for deletion only after being applied to or transferred to standby database destinations. We can set the policy for mandatory standby destinations only, or for any standby destinations.
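Two hedged examples of such configurations (the backup count and device type are illustrative):
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;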

6.) Network-enabled database duplication without backups :  We can use the DUPLICATE command to create a duplicate database or physical standby database over the network without a need for pre-existing database backups. This form of duplication is called active database duplication.
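A hedged sketch of active database duplication (the service names and passwords are illustrative, and the auxiliary instance must already be started in NOMOUNT mode with a password file):
C:\>rman target sys/oracle@noida auxiliary sys/oracle@delhi
RMAN> DUPLICATE TARGET DATABASE TO delhi FROM ACTIVE DATABASE NOFILENAMECHECK;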

7.) Recovery catalog enhancements :  The owner of a recovery catalog can GRANT or REVOKE access to a subset of the catalog to other database users in the same recovery catalog database. This subset is called a virtual private catalog. We can also use the IMPORT CATALOG command to merge one recovery catalog (or metadata for specific databases in the catalog) into another recovery catalog.
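A hedged sketch of creating a virtual private catalog (the catalog owner, grantee, and database name are illustrative; the grantee must already exist as a database user with the RECOVERY_CATALOG_OWNER role):
RMAN> CONNECT CATALOG rman/rman@catdb
RMAN> GRANT CATALOG FOR DATABASE delhi TO vpc_user;
Then, connected to the catalog as vpc_user:
RMAN> CREATE VIRTUAL CATALOG;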

8.) Multisection backups :  RMAN can back up a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section. We create a multisection backup by specifying SECTION SIZE on the BACKUP command. Restoring a multisection backup in parallel is automatic and requires no option. We can parallelize validations of a file with VALIDATE ... SECTION SIZE.
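For example (the section size and datafile number are illustrative):
RMAN> BACKUP SECTION SIZE 500M DATAFILE 6;
RMAN> VALIDATE DATAFILE 6 SECTION SIZE 500M;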

9.) Undo optimization :  The BACKUP command does not back up undo that is not needed for recovery of a backup. Undo is not needed if it was generated for a transaction that has already committed. This undo can represent the majority of undo in the database.

10.) Improved block media recovery performance  :  When performing block media recovery, RMAN automatically searches the flashback logs, if they are available, for the required blocks before searching backups. Using blocks from the flashback logs can significantly improve block media recovery performance.
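Block media recovery itself is invoked as before, for example (the file and block numbers are illustrative):
RMAN> RECOVER DATAFILE 4 BLOCK 56;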

11.) Improved block corruption detection  :  Several database components and utilities, including RMAN, can now detect a corrupt block and record it in V$DATABASE_BLOCK_CORRUPTION. When instance recovery detects a corrupt block, it records it in this view automatically. Oracle Database automatically updates this view when block corruptions are detected or repaired. The VALIDATE command is enhanced with many new options such as VALIDATE ... BLOCK and VALIDATE DATABASE.
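For example (the file and block numbers are illustrative):
RMAN> VALIDATE DATABASE;
RMAN> VALIDATE DATAFILE 4 BLOCK 56;
Any corruption found can then be viewed from SQL*Plus in V$DATABASE_BLOCK_CORRUPTION.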

12.) Faster backup compression  :  In addition to the existing BZIP2 algorithm for binary compression of backups, RMAN also supports the ZLIB algorithm. ZLIB runs faster than BZIP2, but produces larger files. ZLIB requires the Oracle Advanced Compression option. We can use the CONFIGURE COMPRESSION ALGORITHM command to choose between BZIP2 (the default) and ZLIB for RMAN backups.
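For example:
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';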

13.) Block change tracking support for standby databases :  We can enable block change tracking on a physical standby database. When we back up the standby database, RMAN can use the block change tracking file to quickly identify the blocks that changed since the last incremental backup.
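It is enabled on the standby with the usual SQL command (a sketch; the tracking file location is illustrative):
SQL> alter database enable block change tracking using file 'D:\oracle\oradata\delhi\bct.chg';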

14.) Improved scripting with RMAN substitution variables  : We can create RMAN command files and stored scripts that accept user input at runtime. Thus, backup scripts can use RMAN substitution variables for tags, filenames, restore point names, and so on.
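A hedged sketch of a command file driven by substitution variables (the file name, tag, and restore point values are illustrative):
# contents of backup_keep.rman
BACKUP DATABASE TAG '&1' KEEP UNTIL TIME 'SYSDATE+365' RESTORE POINT '&2';
C:\>rman target / @backup_keep.rman USING q1_2011 rp_q1_2011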

15.) Integration with VSS-enabled applications  : The Oracle VSS writer is integrated with applications that use the Volume Shadow Copy Service (VSS) infrastructure on Windows. We can use VSS-enabled software and storage systems to back up and restore an Oracle database. A key benefit is the ability to make a shadow copy of an open database.

16.) Lost write detection :  We can enable the DB_LOST_WRITE_PROTECT initialization parameter to detect a lost write during managed recovery of a standby database or media recovery of a primary database. Lost write detection is disabled by default.
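For example (TYPICAL and FULL enable the check; NONE, the default, disables it):
SQL> alter system set db_lost_write_protect = typical;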

17.) Backup of read-only transportable tablespaces :  In previous releases, RMAN could not back up transportable tablespaces until they were made read/write at the destination database. Now RMAN can back up transportable tablespaces while they are still read-only, and restore from these backups.

18.) Backup and recovery enhancements in Oracle Enterprise Manager : Enterprise Manager includes an interface for Data Recovery Advisor.

19.) Oracle Flashback Transaction : We can reverse a transaction. Oracle Database determines the dependencies between transactions and in effect creates a compensating transaction that reverses the unwanted changes. The database rewinds to a state as if the transaction, and any transactions that could be dependent on it, never occurred.

20.) Flashback data archive :  A flashback data archive enables the database to automatically track and store all transactional changes to a table for the duration of its lifetime. Thus, we do not need to build this functionality into database applications. A flashback data archive is especially useful for compliance, audit reports, data analysis, and DSS (decision support systems). We can use some of the logical flashback features with a flashback data archive to access data from far in the past.
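A hedged sketch of creating and using a flashback data archive (the archive name, tablespace, retention, and table are illustrative):
SQL> create flashback archive fla1 tablespace users retention 1 year;
SQL> alter table neer.test flashback archive fla1;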

21.) Improved media recovery performance for databases on SMP systems  : Media recovery of databases on symmetric multiprocessing (SMP) computers is now faster. The performance improvements include the following:
  • More parallelism
  • More efficient asynchronous redo read, parse, and apply
  • Fewer synchronization points in the parallel apply algorithm
  • The media recovery checkpoint at a redo log boundary no longer blocks the apply of the next log.
No configuration is necessary, although we can use new parallel recovery wait events for tuning if the default apply rate is not satisfactory.




Enjoy !



Manually Creating an Oracle 11g Database

Creating an Oracle 11g database manually is much the same as in 10g. Here are the steps to create the Oracle database manually.

Step 1:  Create the initialization parameter file
Create an initialization parameter file named initdelhi.ora. Below are the contents of my initdelhi.ora :

audit_file_dest='D:\oracle\admin\delhi\adump'
compatible='11.1.0.0.0'
control_files='D:\ORACLE\ORADATA\DELHI\CONTROL01.CTL'
db_block_size=8192
db_name='delhi'
diagnostic_dest='D:\oracle'
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'

Step 2 :  Create Oracle service

C:\> oradim -new -sid delhi -startmode m -intpwd oracle
 Instance created.

Step 3: Create the folders
Create the folders at the locations specified in the pfile (i.e. the oradata and admin directories).

Step 4 : Create Database

C:\>set ORACLE_SID=delhi
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 20 12:43:03 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='D:\oracle\admin\delhi\pfile\initdelhi.ora';
ORACLE instance started.
Total System Global Area  150667264 bytes
Fixed Size                  1331740 bytes
Variable Size              92278244 bytes
Database Buffers           50331648 bytes
Redo Buffers                6725632 bytes

SQL>  create database
  2  datafile 'D:\oracle\oradata\delhi\system01.dbf' size 1G
  3  sysaux datafile 'D:\oracle\oradata\delhi\sysaux.dbf' size 800M
  4  UNDO TABLESPACE "UNDOTBS1"
  5  DATAFILE 'D:\oracle\oradata\delhi\UNDOTBS01.DBF' size 200m
  6  LOGFILE group 1 'D:\oracle\oradata\delhi\REDO01.LOG' SIZE 100M,
  7  group 2'D:\oracle\oradata\delhi\REDO02.LOG' SIZE 100M,
  8  group 3'D:\oracle\oradata\delhi\REDO03.LOG' SIZE 100M;

Database created.
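The CREATE DATABASE statement above does not define a default temporary tablespace, so it is a good idea to add one straight away (a sketch; the tempfile name and size are illustrative):
SQL> create temporary tablespace temp tempfile 'D:\oracle\oradata\delhi\temp01.dbf' size 200m;
SQL> alter database default temporary tablespace temp;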

Step 5 : Run the following scripts

These scripts are run from the SQL*Plus prompt as SYS :
SQL> @D:\oracle\product\11.1.0\db_1\RDBMS\ADMIN\catalog.sql
SQL> @D:\oracle\product\11.1.0\db_1\RDBMS\ADMIN\catproc.sql

SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
DELHI     READ WRITE
SQL> select program from v$session;
PROGRAM
sqlplus.exe
ORACLE.EXE (q000)
ORACLE.EXE (QMNC)
ORACLE.EXE (W000)
ORACLE.EXE (FBDA)
ORACLE.EXE (SMCO)
ORACLE.EXE (DBRM)
ORACLE.EXE (PMON)
ORACLE.EXE (DIAG)
ORACLE.EXE (DBW0)
ORACLE.EXE (CKPT)
ORACLE.EXE (SMON)
ORACLE.EXE (RECO)
ORACLE.EXE (MMON)
ORACLE.EXE (MMNL)
ORACLE.EXE (LGWR)
ORACLE.EXE (VKTM)
ORACLE.EXE (DIA0)
ORACLE.EXE (MMAN)
ORACLE.EXE (PSP0)
sqlplus.exe
21 rows selected.



Enjoy !


Table_exists_action Parameter of Data Pump

Sometimes we need to import a table into a schema where that table already exists. If we simply import the table into that schema, the import throws an error about the existence of the table. If we have to preserve the old data of the table and append the new data, we can use the TABLE_EXISTS_ACTION parameter of Data Pump. The valid keywords are {SKIP | APPEND | TRUNCATE | REPLACE}.

The possible values have the following effects :

1.) SKIP  : leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY. SKIP is the default value.

2.) APPEND loads rows from the source and leaves existing rows unchanged.

3.) TRUNCATE deletes existing rows and then loads rows from the source.

4.) REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

Here is a  DEMO of the TABLE_EXISTS_ACTION parameter  :

First of all, we will take an export of a table (say TEST) that resides in the NEER schema.

C:\>expdp system/xxxx@noida directory=dpump tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Export: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:21:28
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/********@noida directory=dpump tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "NEER"."TEST"                               5.062 KB       9 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DPUMP\NEER_TEST.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 13:23:13

Now we consider each of the valid keywords of the TABLE_EXISTS_ACTION parameter.

Case 1 : table_exists_action=skip (the default)

C:\>impdp system/xxxx@noida directory=dpump full=y  dumpfile=neer_test.dmp logfile=imp_neerlog.log
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:32:22
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump full=y dumpfile=neer_test.dmp logfile=imp_neerlog.log
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39151: Table "NEER"."TEST" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:32:35

Hence, the above result shows that the table is skipped.

Case 2 : table_exists_action=append
Now we drop the table TEST, recreate it, and populate it with a few values.

SQL> drop table test;
Table dropped.
SQL> create table test (id number);
Table created.

SQL> insert into test values (&Y);
Enter value for y: 111
old   1: insert into test values (&Y)
new   1: insert into test values (111)
1 row created.

SQL> /
Enter value for y: 222
old   1: insert into test values (&Y)
new   1: insert into test values (222)
1 row created.

SQL> /
Enter value for y: 333
old   1: insert into test values (&Y)
new   1: insert into test values (333)
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
       111
       222
       333

Now we will import the dump into the NEER schema, which already has the table TEST.

SQL>HOST impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=append
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:22:39
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "NEER"."TEST" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:22:58

SQL> select * from test;
        ID
----------
       111
       222
       333
        11
        22
        33
        44
        55
        66
        77
        88

        ID
----------
        99
12 rows selected.

Hence we find that the imported rows are appended to the existing table. With APPEND, only the table data is imported; dependent metadata such as indexes is skipped.

Case 3 :  table_exists_action=truncate
We already have 12 rows in table TEST. Now we again import the dump, which contains 9 rows.

SQL>host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=truncate
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:26:35
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39153: Table "NEER"."TEST" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:26:51

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Case 4  : table_exists_action=replace
Now we will add a few rows to table TEST to check the result.

SQL> insert into test values(1234);
1 row created.
SQL> insert into test values(12345);
1 row created.
SQL> insert into test values(34567);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
      1234
     12345
     34567
12 rows selected.

SQL>host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=replace
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:33:23
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 14:33:42

SQL> select  *  from test ; 
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Hence, table_exists_action=replace internally drops and recreates the table, so all the existing metadata is also dropped and recreated.
Note : REPLACE is not a valid option for a job that contains no metadata (for example, an import run with CONTENT=DATA_ONLY).



Enjoy !



Tuesday, April 19, 2011

ORA-25153: Temporary Tablespace is Empty


Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. Once, while exporting a table through Data Pump, the ORA-25153 error occurred.

C:\>expdp system/xxxx@noida tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Export: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 12:47:37
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-25153: Temporary Tablespace is Empty
ORA-06512: at "SYS.DBMS_LOB", line 555
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3487
ORA-06512: at line 1

Cause ::  An attempt was made to use space in a temporary tablespace that has no tempfiles (datafiles) defined.

Solution ::  To solve this problem, I simply added a tempfile to the TEMP tablespace by using the ADD TEMPFILE command :

SQL> alter tablespace temp add tempfile 'D:\oracle\oradata\noida\temp02.dbf' size 50m;
Tablespace altered.

If we check and find that the TEMP tablespace already has tempfiles, then check the default temporary tablespace for all users and for the database, and set the default temporary tablespace to a valid temporary tablespace.

To check the default temporary tablespace of the database:
SQL> select property_name, property_value from database_properties;
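If the default temporary tablespace is wrong or empty, it can be reset at the database or user level (a sketch; the tablespace and user names are illustrative):
SQL> alter database default temporary tablespace temp;
SQL> alter user neer temporary tablespace temp;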


Performance Tuning of Data Pump

The Data Pump utilities are designed especially for very large databases. If our site has very large quantities of data versus metadata, we should experience a dramatic increase in performance compared to the original Export and Import utilities. 

The Data Pump Export and Import utilities enable us to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL parameter to specify a degree of parallelism for the job. (The PARALLEL parameter is the only tuning parameter that is specific to Data Pump.) For maximum throughput, do not set PARALLEL to much more than 2x the CPU count.
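A hedged sketch of a parallel full export (the directory, dump-file template, and degree of parallelism are illustrative; the %U template lets each worker process write to its own dump file):
C:\>expdp system/xxxx@noida directory=dpump full=y parallel=4 dumpfile=full_%U.dmp logfile=exp_full.log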

There are a few initialization parameters that affect Data Pump performance. In particular, we can try the following settings to improve performance, although the effect may not be the same on all platforms (note that turning off block checking and checksums reduces the database's ability to detect corruption, so weigh this trade-off).

DISK_ASYNCH_IO=TRUE
DB_BLOCK_CHECKING=FALSE
DB_BLOCK_CHECKSUM=FALSE

Additionally, the following initialization parameters must have values set high enough to allow for maximum parallelism:

PROCESSES
SESSIONS
PARALLEL_MAX_SERVERS
