Tuesday, April 19, 2011

Loss of Control File in Various Scenarios

The control file is a small binary file that is part of an Oracle database. It keeps track of the database's status and physical structure and is absolutely crucial to database operation. Here we will discuss the various scenarios in which control files get lost or corrupted.
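Throughout these scenarios it helps to know where the control files currently live. Either of the following is a quick check (V$CONTROLFILE is populated once the database is mounted; the parameter can be shown even in NOMOUNT):

SQL> select name from v$controlfile;
SQL> show parameter control_files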

CASE 1 : One of the control files is lost or corrupted
If one of the control files is lost or corrupted while the database is shut down, we get the following error on startup:

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 15:41:33 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             239077704 bytes
Database Buffers           71303168 bytes
Redo Buffers                6332416 bytes
ORA-00205: error in identifying control file, check alert log for more info 

Checking the alert log file, we find the following information:
ALTER DATABASE   MOUNT
Mon Apr 18 15:42:12 2011
ORA-00210: cannot open the specified control file
ORA-00202: control file: 'D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Mon Apr 18 15:42:14 2011
Checker run found 1 new persistent data failures
ORA-205 signalled during: ALTER DATABASE   MOUNT

To solve this issue, copy one of the surviving control files (say control01.ctl or control03.ctl) to the location where the missing control file resided and give the copy the name of the missing file. In the example above the missing file is CONTROL02.CTL, so control01.ctl can simply be duplicated as shown below.
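A minimal sketch of the copy on Windows, assuming the paths used throughout this example (the file must be copied while the database is down or not mounted):

C:\> copy D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL

With the copy in place, mount and open the database: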

SQL> alter database mount;
Database altered.

SQL> alter database open;
Database altered.

SQL> select name,open_mode from v$database ;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE

CASE 2 : All the control files are lost, but a valid backup exists
If we have a valid backup and all the control files are lost, we can restore them from the control file autobackup, either automatically or by specifying the location of the autobackup piece.
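For this to work, control file autobackup must have been enabled before the loss; it is disabled by default and can be switched on with a one-line RMAN configuration:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

Assuming such an autobackup exists, here is the restore: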

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 16:21:55 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             272632136 bytes
Database Buffers           37748736 bytes
Redo Buffers                6332416 bytes

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>rman target sys/xxxx@noida

Recovery Manager: Release 11.1.0.6.0 - Production on Mon Apr 18 16:33:34 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: NOIDA (not mounted)

RMAN> restore controlfile from 'D:\orcl_bkp\cf\C-1502483083-20110418-01';  (location of controlfile)
Starting restore at 18-APR-11
using channel ORA_DISK_1
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL
Finished restore at 18-APR-11

RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1

RMAN> recover database;
Starting recover at 18-APR-11
Starting implicit crosscheck backup at 18-APR-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
Crosschecked 7 objects
Finished implicit crosscheck backup at 18-APR-11
Starting implicit crosscheck copy at 18-APR-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 18-APR-11
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 19 is already on disk as file D:\ORACLE\ORADATA\NOIDA\RE
archived log file name=D:\ORACLE\ORADATA\NOIDA\REDO01.LOG thread=1 sequence=19
media recovery complete, elapsed time: 00:00:05
Finished recover at 18-APR-11

RMAN> alter database open resetlogs;
database opened
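
If the exact name of the autobackup piece is not known, the control file can also be restored from autobackup after setting the DBID (a sketch, run with the instance in NOMOUNT and using this database's DBID as shown later in this post; the autobackup must be in its default or configured location):

RMAN> SET DBID 1502483083;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;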

CASE 3 : No backup exists and all control files are lost or corrupted

SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             281020744 bytes
Database Buffers           29360128 bytes
Redo Buffers                6332416 bytes

Now we create the control file manually from the SQL prompt:

SQL> CREATE CONTROLFILE REUSE DATABASE  "NOIDA"   NORESETLOGS archivelog
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 10
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 'D:\oracle\oradata\noida\REDO01.LOG' SIZE 50M,
GROUP 2 'D:\oracle\oradata\noida\REDO02.LOG' SIZE 50M,
GROUP 3 'D:\oracle\oradata\noida\REDO03.LOG' SIZE 50M
DATAFILE
'D:\oracle\oradata\noida\SYSTEM01.DBF' ,
'D:\oracle\oradata\noida\USERS01.DBF' ,
'D:\oracle\oradata\noida\EXAMPLE01.DBF' ,
'D:\oracle\oradata\noida\SYSAUX01.DBF' ,
'D:\oracle\oradata\noida\TRANS.DBF' ,
'D:\oracle\oradata\noida\UNDOTBS01.DBF'   ;
Control file created.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Disabled
Archive destination            D:\archive\
Oldest online log sequence     1
Next log sequence to archive   1
Current log sequence           1

SQL> select first_change# ,group# from v$log;
FIRST_CHANGE#     GROUP#
------------- ----------
      1313491          1
            0          3
            0          2

SQL> alter database open;

SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE
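
One follow-up worth remembering: a control file created this way carries no tempfile entries, so the temporary tablespace usually needs its tempfile added back (a sketch; the tablespace name TEMP and the file path are assumptions based on a default layout):

SQL> alter tablespace temp add tempfile 'D:\oracle\oradata\noida\TEMP01.DBF' size 100m reuse;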


Enjoy   :-)



Monday, April 18, 2011

Create Control File Manually: When and How?

The control files of a database store the status of the physical structure of the database. The control file is absolutely crucial to database operation.
A control file contains:

> Database information (RESETLOGS SCN and its time stamp)
> Archive log history
> Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline or not)
> Redo logs (current online redo log)
> Database creation date
> Database name
> Current archive log mode
> Log records (sequence numbers, SCN range in each log)
> RMAN backup and recovery records (the RMAN repository when no recovery catalog is used)
> Database block corruption information
> Database ID (DBID), which is unique to each database

If the control file is lost, recovery is somewhat difficult because the database cannot even be mounted; the control file must be recreated. We can manually create a new control file for a database using the CREATE CONTROLFILE statement, as shown below.
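If the database is still intact, the full CREATE CONTROLFILE script can be generated ahead of time rather than typed from memory (a small precaution; the trace-file mechanics are covered later on this page):

SQL> alter database backup controlfile to trace;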

When to Create New Control Files :
It is necessary for us to create new control files in the following situations:

1.) All control files for the database have been permanently damaged and we do not have a control file backup.
2.) We want to change the database name. For example, we would change a database name if it conflicted with another database name in a distributed environment.
3.) The compatibility level is set to a value that is earlier than 10g, and we must make a change to an area of database configuration that relates to any of the following parameters from the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES. If compatibility is 10g or later, we do not have to create new control files when we make such a change; the control files automatically expand, if necessary, to accommodate the new configuration information.

For example, assume that when we created the database or recreated the control files, we set MAXLOGFILES to 3. Suppose that now we want to add a fourth redo log file group to the database with the ALTER DATABASE command (a sketch follows). If compatibility is set to 10g or later, we can do so and the control files automatically expand to accommodate the new log file information. However, with compatibility set earlier than 10g, the ALTER DATABASE command would generate an error, and we would have to create new control files first.
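A sketch of such an ALTER DATABASE statement, following the file layout used elsewhere in this post (the REDO04.LOG path is illustrative):

SQL> alter database add logfile group 4 ('D:\oracle\oradata\noida\REDO04.LOG') size 50M;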

Command to Create Controlfile Manually 

C:\>sqlplus sys/ramtech@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:31:50 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE  "NOIDA"
NORESETLOGS archivelog
MAXLOGFILES 5 
MAXLOGMEMBERS 3 
MAXDATAFILES 10 
MAXINSTANCES 1 
MAXLOGHISTORY 113
LOGFILE 
GROUP 1 'D:\oracle\oradata\noida\REDO01.LOG' SIZE 50M,
GROUP 2 'D:\oracle\oradata\noida\REDO02.LOG' SIZE 50M,
GROUP 3 'D:\oracle\oradata\noida\REDO03.LOG' SIZE 50M
DATAFILE 
'D:\oracle\oradata\noida\SYSTEM01.DBF' , 
'D:\oracle\oradata\noida\USERS01.DBF' , 
'D:\oracle\oradata\noida\EXAMPLE01.DBF' , 
'D:\oracle\oradata\noida\SYSAUX01.DBF' ,
'D:\oracle\oradata\noida\TRANS.DBF' ,
'D:\oracle\oradata\noida\UNDOTBS01.DBF' ;

Specify RESETLOGS if we want Oracle to ignore the contents of the files listed in the LOGFILE clause. The log files do not have to exist, but each redo_log_file_spec in the LOGFILE clause must specify the SIZE parameter. Oracle assigns all online redo log file groups to thread 1 and enables this thread for public use by any instance. We must then open the database using ALTER DATABASE OPEN RESETLOGS.
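A minimal sketch of that RESETLOGS path (media recovery may be needed first, depending on the state of the datafiles):

SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;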

NORESETLOGS uses all files in the LOGFILE clause as they were when the database was last open. These files must exist and must be the current online redo log files rather than restored backups. Oracle reassigns the redo log file groups to re-enabled threads as previously assigned.


Enjoy   :-)


How to Determine the Name of the Trace File to be Generated


In many cases we need to find out the name of the latest trace file generated in the USER_DUMP_DEST directory. What we usually do is go to the USER_DUMP_DEST location in the operating system's file browser, sort all the files by date, and look for the latest ones. We can avoid this hassle entirely if we know in advance what the trace file name will be. Let's have a look.

Demo : 1 
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:44:49 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter database backup controlfile to trace;
Database altered.

The above command generates a trace file inside USER_DUMP_DEST. Let's check the location of USER_DUMP_DEST. If we are using SQL*Plus, issue:

SQL> show parameter user_dump_dest
NAME                 TYPE        VALUE
-------------------- ----------- ----------------------------------------
user_dump_dest       string      d:\oracle\diag\rdbms\noida\noida\trace

Here the newest files correspond to the latest traces, but sorting by date does not always point us at the right trace file. The task becomes much easier if we know, in advance, the name of the trace file that the ALTER DATABASE command will generate. We can get the trace file name in advance as follows:

SQL> SELECT s.sid, s.serial#,
            pa.value || '\' || LOWER(SYS_CONTEXT('userenv','instance_name')) ||
            '_ora_' || p.spid || '.trc' AS trace_file
     FROM   v$session s, v$process p, v$parameter pa
     WHERE  pa.name = 'user_dump_dest'
     AND    s.paddr = p.addr
     AND    s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');

       SID    SERIAL# TRACE_FILE
---------- ---------- ----------------------------------------------------------
       110        312 d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3552.trc
  
The trace file generated now will be named noida_ora_3552.trc. So issuing "alter database backup controlfile to trace" will generate the file d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3552.trc.
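On 11g there is an even shorter route: the V$DIAG_INFO view reports the default trace file of the current session directly (a quick sketch):

SQL> select value from v$diag_info where name = 'Default Trace File';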

Demo : 2
This method is simpler and makes the trace file easier to identify. Let's have a look at another demo.

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:49:49 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter user_dump_dest
NAME                 TYPE        VALUE
-------------------- ----------- ----------------------------------------
user_dump_dest       string      d:\oracle\diag\rdbms\noida\noida\trace

SQL> alter session set tracefile_identifier='mytracefile' ;
Session altered.
SQL> alter database backup controlfile to trace;
Database altered.


Now, go to the user_dump_dest location and find the trace file whose name contains "mytracefile". In my case the name is "noida_ora_3552_mytracefile.trc".

The difference between the two demos is that the first works at the system level, so it can list trace files generated by different sessions, whereas the second identifies the trace file of one particular session only. The other difference is that in the first demo we fire the command and then check for the trace file, whereas in the second we set the trace file identifier up front so that the right trace file is easy to recognise.
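
If searching USER_DUMP_DEST is to be avoided altogether, the trace can also be written straight to a file of our choosing (a sketch; the target path below is hypothetical):

SQL> alter database backup controlfile to trace as 'd:\backup\noida_controlfile.sql';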


Enjoy      :-)


Saturday, April 16, 2011

How To Determine the DBID ?

The DBID is a unique identifier found in all datafile headers; it is used to identify the database a file belongs to. There are situations where we need to recover the spfile or control file from autobackup, such as a disaster recovery in which all database files have been lost, and in such cases we need the DBID to restore the database. If we do not have a record of the DBID, there are two places from which we can easily find it.

1.) The DBID is used in forming the file name of the control file autobackup. The name of my autobackup control file is C-1502483083-20110416-00.

Here 1502483083 is the DBID and 20110416 is the date, i.e. 16th April 2011.

2.) If we have any text files that preserve the output from an RMAN session, the DBID is displayed by the RMAN client when it starts up and connects to your database. Typical output follows:


C:\>rman target sys/xxxx@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Sat Apr 16 17:23:28 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: NOIDA (DBID=1502483083)
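
When the database is open, the DBID can also be read directly from V$DATABASE (a quick sketch):

SQL> select dbid from v$database;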


Enjoy   :-)


FLASHBACK_SCN and FLASHBACK_TIME Parameters of Data Pump

FLASHBACK_SCN and FLASHBACK_TIME are two important features of Data Pump in 11g. If we want to run a large export while the database is in use, we should ideally always use one of the two flashback parameters. The export operation is then performed with data that is consistent as of the specified SCN. FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

FLASHBACK_TIME : The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN. These parameters pertain only to the Flashback Query capability of Oracle Database; they are not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive. We can get the SCN from either of the following queries:

SQL> select current_scn from v$database;
  or
SQL> select dbms_flashback.get_system_change_number from dual;
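
If a wall-clock time is more convenient than an SCN, FLASHBACK_TIME accepts a TO_TIMESTAMP expression. A sketch using a parameter file (the file name flashback.par and the timestamp value are illustrative; a parameter file side-steps operating-system quoting issues):

contents of flashback.par:
directory=dpump
schemas=hr
dumpfile=flashback_time_hr.dmp
logfile=flashback_time.log
flashback_time="TO_TIMESTAMP('16-04-2011 11:30:00','DD-MM-YYYY HH24:MI:SS')"

C:\>expdp system/xxxx@noida parfile=flashback.par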

Let's have a demo of the FLASHBACK_SCN parameter.

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1140271

SQL> create table hr.test as select * from test;
Table created.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1140487
Let's take an export using the FLASHBACK_SCN parameter:

C:\>expdp system/ramtech@noida directory=dpump schemas=hr dumpfile=flashback_hr.dmp logfile=flashlog.log       flashback_scn=1140271
Export: Release 11.1.0.6.0 - Production on Saturday, 16 April, 2011 11:35:45
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=hr dumpfile=flashback_hr.dmp logfile=flashlog.log    flashback_scn=1140271
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 512 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                            6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."JOBS"                                 6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.484 KB       4 rows
ORA-31693: Table data object "HR"."TEST" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\FLASHBACK_HR.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at 11:37:50

The above error shows that the table "test" is not included in the export because the specified SCN is from before the table "test" was created. The export below, run without a flashback parameter, exports everything up to the current SCN while the database is in use.

C:\>expdp system/ramtech@noida directory=dpump schemas=hr dumpfile=flashback_hr1.dmp  logfile=flashback_log.log
Export: Release 11.1.0.6.0 - Production on Saturday, 16 April, 2011 11:44:50
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=hr dumpfile=flashback_hr1.dmp logfile=flashback_log.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 512 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                                6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                             16.80 KB     107 rows
. . exported "HR"."JOBS"                                             6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                             7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                                8.273 KB      23 rows
. . exported "HR"."REGIONS"                                      5.484 KB       4 rows
. . exported "HR"."TEST"                                              5.054 KB       8 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\FLASHBACK_HR1.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:46:41


Enjoy     :-)