Wednesday, April 20, 2011

Manually Creating an Oracle 11g Database

Creating an Oracle 11g database manually is much the same as in 10g. Here are the steps to create the Oracle database manually.

Step 1 : Create the initialization parameter file
Create an initialization parameter file named initdelhi.ora. Below are the contents of my initdelhi.ora:

audit_file_dest='D:\oracle\admin\delhi\adump'
compatible='11.1.0.0.0'
control_files='D:\ORACLE\ORADATA\DELHI\CONTROL01.CTL'
db_block_size=8192
db_name='delhi'
diagnostic_dest='D:\oracle'
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'

Step 2 :  Create Oracle service

C:\> oradim -new -sid delhi -startmode m -intpwd oracle
 Instance created.
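If the service ever needs to be removed and recreated (say, after a typo in the SID), oradim can also delete it. A hypothetical cleanup:

C:\> oradim -delete -sid delhi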

Step 3 : Create the folders
Create the folders at the locations specified in the pfile (i.e. the oradata and admin folders).
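On Windows these folders can be created from the prompt. A minimal sketch matching the paths used in the pfile above (plus the pfile folder referenced in Step 4):

C:\> mkdir D:\oracle\oradata\delhi
C:\> mkdir D:\oracle\admin\delhi\adump
C:\> mkdir D:\oracle\admin\delhi\pfile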

Step 4 : Create the database

C:\>set ORACLE_SID=delhi
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 20 12:43:03 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='D:\oracle\admin\delhi\pfile\initdelhi.ora';
ORACLE instance started.
Total System Global Area  150667264 bytes
Fixed Size                  1331740 bytes
Variable Size              92278244 bytes
Database Buffers           50331648 bytes
Redo Buffers                6725632 bytes

SQL>  create database
  2  datafile 'D:\oracle\oradata\delhi\system01.dbf' size 1G
  3  sysaux datafile 'D:\oracle\oradata\delhi\sysaux.dbf' size 800M
  4  UNDO TABLESPACE "UNDOTBS1"
  5  DATAFILE 'D:\oracle\oradata\delhi\UNDOTBS01.DBF' size 200m
  6  LOGFILE group 1 'D:\oracle\oradata\delhi\REDO01.LOG' SIZE 100M,
  7  group 2 'D:\oracle\oradata\delhi\REDO02.LOG' SIZE 100M,
  8  group 3 'D:\oracle\oradata\delhi\REDO03.LOG' SIZE 100M;

Database created.
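Note that this CREATE DATABASE statement is fairly minimal. A fuller statement often also names a default temporary tablespace and a character set; a hypothetical addition to the statement above (clause placement and file name are illustrative):

DEFAULT TEMPORARY TABLESPACE temp
   TEMPFILE 'D:\oracle\oradata\delhi\temp01.dbf' SIZE 100M
CHARACTER SET AL32UTF8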

Step 5 : Run the following scripts (still connected as SYSDBA)

SQL> @D:\oracle\product\11.1.0\db_1\RDBMS\ADMIN\catalog.sql
SQL> @D:\oracle\product\11.1.0\db_1\RDBMS\ADMIN\catproc.sql
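Optionally, pupbld.sql can be run as SYSTEM to avoid the "product user profile" warning at login; the path below assumes the same Oracle home:

SQL> connect system/<password>
SQL> @D:\oracle\product\11.1.0\db_1\sqlplus\admin\pupbld.sql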

SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
DELHI     READ WRITE
SQL> select program from v$session;
PROGRAM
sqlplus.exe
ORACLE.EXE (q000)
ORACLE.EXE (QMNC)
ORACLE.EXE (W000)
ORACLE.EXE (FBDA)
ORACLE.EXE (SMCO)
ORACLE.EXE (DBRM)
ORACLE.EXE (PMON)
ORACLE.EXE (DIAG)
ORACLE.EXE (DBW0)
ORACLE.EXE (CKPT)
ORACLE.EXE (SMON)
ORACLE.EXE (RECO)
ORACLE.EXE (MMON)
ORACLE.EXE (MMNL)
ORACLE.EXE (LGWR)
ORACLE.EXE (VKTM)
ORACLE.EXE (DIA0)
ORACLE.EXE (MMAN)
ORACLE.EXE (PSP0)
sqlplus.exe
21 rows selected.



Enjoy :-)


Table_exists_action Parameter of Data Pump

Sometimes we need to import a table into a schema where that table already exists. A plain import then throws an error about the existence of the table. If we need to preserve the old data and append the new data, we can use the TABLE_EXISTS_ACTION parameter of Data Pump. The valid keywords are {SKIP | APPEND | TRUNCATE | REPLACE}.

These values have the following effects:

1.) SKIP : leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY. SKIP is the default.

2.) APPEND : loads rows from the source and leaves existing rows unchanged.

3.) TRUNCATE : deletes existing rows and then loads rows from the source.

4.) REPLACE : drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

Here is a DEMO of the TABLE_EXISTS_ACTION parameter:

First of all, we export the table (say TEST) residing in the NEER schema.

C:\>expdp system/xxxx@noida directory=dpump tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Export: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:21:28
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/********@noida directory=dpump tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "NEER"."TEST"                               5.062 KB       9 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DPUMP\NEER_TEST.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 13:23:13

Now we consider each of the valid keywords of the TABLE_EXISTS_ACTION parameter.

Case 1 : table_exists_action=skip (the default)

C:\>impdp system/xxxx@noida directory=dpump full=y  dumpfile=neer_test.dmp logfile=imp_neerlog.log
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:32:22
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump full=y dumpfile=neer_test.dmp logfile=imp_neerlog.log
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39151: Table "NEER"."TEST" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:32:35

Hence, the above results show that the table is skipped.

Case 2 : table_exists_action=append
Now we drop the table TEST, recreate it, and populate it with values.

SQL> drop table test;
Table dropped.
SQL> create table test (id number);
Table created.

SQL> insert into test values (&Y);
Enter value for y: 111
old   1: insert into test values (&Y)
new   1: insert into test values (111)
1 row created.

SQL> /
Enter value for y: 222
old   1: insert into test values (&Y)
new   1: insert into test values (222)
1 row created.

SQL> /
Enter value for y: 333
old   1: insert into test values (&Y)
new   1: insert into test values (333)
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
       111
       222
       333

Now we import the dump into the NEER schema, where the table TEST already exists:

SQL> host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=append
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:22:39
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "NEER"."TEST" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:22:58

SQL> select * from test;
        ID
----------
       111
       222
       333
        11
        22
        33
        44
        55
        66
        77
        88

        ID
----------
        99
12 rows selected.

Hence we find that the imported rows are appended to the existing table. With APPEND, only the table data is imported; dependent metadata such as indexes is skipped.

Case 3 : table_exists_action=truncate
The table TEST now holds 12 rows. Now we again import the dump, which contains 9 rows.

SQL> host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=truncate
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:26:35
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39153: Table "NEER"."TEST" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:26:51

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Case 4 : table_exists_action=replace
Now we add a few rows to table TEST to check the results.

SQL> insert into test values(1234);
1 row created.
SQL> insert into test values(12345);
1 row created.
SQL> insert into test values(34567);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
      1234
     12345
     34567
12 rows selected.

SQL> host impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=replace
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:33:23
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=NEER_TEST.dmp table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "NEER"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 14:33:42

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Hence, TABLE_EXISTS_ACTION=REPLACE internally drops and recreates the table, so all the existing metadata is also dropped and recreated from the dump.
Note : REPLACE is not a valid option for a job that carries no metadata (for example, CONTENT=DATA_ONLY).
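As an illustration, a data-only import over the existing table would therefore use APPEND or TRUNCATE instead of REPLACE (hypothetical invocation reusing the dump from this demo):

C:\>impdp system/xxxx@noida directory=dpump dumpfile=NEER_TEST.dmp content=data_only table_exists_action=truncate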



Enjoy :-)



Tuesday, April 19, 2011

ORA-25153: Temporary Tablespace is Empty


Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. Once, while exporting a table through Data Pump, ORA-25153 occurred:

C:\>expdp system/ramtech@noida tables=neer.test dumpfile=neer_test.dmp logfile=exp_neerlog.log
Export: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 12:47:37
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-25153: Temporary Tablespace is Empty
ORA-06512: at "SYS.DBMS_LOB", line 555
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3487
ORA-06512: at line 1

Cause :: An attempt was made to use space in a temporary tablespace that has no tempfiles defined.

Solution :: To solve this problem, I simply added a tempfile to the TEMP tablespace using the ADD TEMPFILE command:

SQL> alter tablespace temp add tempfile 'D:\oracle\oradata\noida\temp02.dbf' size 50m;
Tablespace altered.
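To verify that the tempfile is now in place, the standard DBA_TEMP_FILES view can be queried:

SQL> select file_name, bytes/1024/1024 as mb from dba_temp_files;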

If we check and find that the TEMP tablespace already has tempfiles, check the default temporary tablespace for all users and for the database, and set the default temporary tablespace to a valid temporary tablespace.

To check the default temporary tablespace of the database:
SQL> select property_name, property_value from database_properties;
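The query can be filtered to the relevant property, and if the default points at an invalid tablespace it can be changed; a short sketch (tablespace name illustrative):

SQL> select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
SQL> alter database default temporary tablespace temp;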


Performance Tuning of Data Pump

The Data Pump utilities are designed especially for very large databases. If our site has very large quantities of data versus metadata, we should experience a dramatic increase in performance compared to the original Export and Import utilities. 

The Data Pump Export and Import utilities enable us to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL parameter to specify a degree of parallelism for the job. (The PARALLEL parameter is the only tuning parameter that is specific to Data Pump.) For maximum throughput, do not set PARALLEL to much more than 2x the CPU count.
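As an illustration, a hypothetical full export using four worker processes; the %U wildcard lets each worker write its own dump file so the workers are not serialized on a single file:

C:\>expdp system/xxxx@noida directory=dpump full=y parallel=4 dumpfile=full_%U.dmp logfile=exp_full.log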

There are a few initialization parameters that affect Data Pump performance. In particular, we can try the following settings to improve performance (a sketch of applying them follows the list), although the effect may not be the same on all platforms:

DISK_ASYNCH_IO=TRUE
DB_BLOCK_CHECKING=FALSE
DB_BLOCK_CHECKSUM=FALSE
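A sketch of applying these: DB_BLOCK_CHECKING and DB_BLOCK_CHECKSUM are dynamic, while DISK_ASYNCH_IO is static, so it only goes to the spfile and takes effect after a restart.

SQL> alter system set db_block_checking=false scope=both;
SQL> alter system set db_block_checksum=false scope=both;
SQL> alter system set disk_asynch_io=true scope=spfile;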

Additionally, the following initialization parameters must have values set high enough to allow for maximum parallelism (a quick check is sketched after the list):

PROCESSES
SESSIONS
PARALLEL_MAX_SERVERS
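A quick way to review the current values, with an illustrative change; PROCESSES (which also bounds SESSIONS by default) is static and needs a restart, and the target value of 300 is purely an example:

SQL> show parameter processes
SQL> show parameter sessions
SQL> show parameter parallel_max_servers
SQL> alter system set processes=300 scope=spfile;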

To check the performance effect of setting one of these parameters (say, PARALLEL), click here.


Loss Of Control-file in various Scenario's

The control file is a small binary file that is part of an Oracle database. It is used to keep track of the database's status and physical structure, and it is absolutely crucial to database operation. Here we will discuss various scenarios in which control files are lost or corrupted.

CASE 1 : One of the control files is lost or corrupted
The database is shut down, and on startup we get the following error due to the lost control file.

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 15:41:33 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             239077704 bytes
Database Buffers           71303168 bytes
Redo Buffers                6332416 bytes
ORA-00205: error in identifying control file, check alert log for more info 

Checking the alert log, we find the following information:
ALTER DATABASE   MOUNT
Mon Apr 18 15:42:12 2011
ORA-00210: cannot open the specified control file
ORA-00202: control file: 'D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Mon Apr 18 15:42:14 2011
Checker run found 1 new persistent data failures
ORA-205 signalled during: ALTER DATABASE   MOUNT

To solve this issue, copy one of the surviving control files (say CONTROL01.CTL or CONTROL03.CTL) to the location where the missing control file resided, and rename the copy to the missing file's name (in this example, CONTROL02.CTL). Then follow the steps below:
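With the instance down, the copy can be done at the OS level; for the paths in this example:

C:\> copy D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL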

SQL> alter database mount;
Database altered.

SQL> alter database open;
Database altered.

SQL> select name,open_mode from v$database ;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE

CASE 2 : All the control files are lost
If we have a valid backup and all the control files are lost, we can restore the control files from the controlfile autobackup, or by specifying the location of the autobackup piece.
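This relies on controlfile autobackup having been enabled beforehand (RMAN> configure controlfile autobackup on;). If the location of the autobackup piece is not known, RMAN can search for it once the DBID is set; a sketch using the DBID that is readable from the piece name below:

RMAN> set dbid 1502483083;
RMAN> restore controlfile from autobackup;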

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 16:21:55 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             272632136 bytes
Database Buffers           37748736 bytes
Redo Buffers                6332416 bytes

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>rman target sys/xxxx@noida

Recovery Manager: Release 11.1.0.6.0 - Production on Mon Apr 18 16:33:34 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: NOIDA (not mounted)

RMAN> restore controlfile from 'D:\orcl_bkp\cf\C-1502483083-20110418-01';   (location of the controlfile autobackup piece)
Starting restore at 18-APR-11
using channel ORA_DISK_1
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL
Finished restore at 18-APR-11

RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1

RMAN> recover database;
Starting recover at 18-APR-11
Starting implicit crosscheck backup at 18-APR-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
Crosschecked 7 objects
Finished implicit crosscheck backup at 18-APR-11
Starting implicit crosscheck copy at 18-APR-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 18-APR-11
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 19 is already on disk as file D:\ORACLE\ORADATA\NOIDA\RE
archived log file name=D:\ORACLE\ORADATA\NOIDA\REDO01.LOG thread=1 sequence=19
media recovery complete, elapsed time: 00:00:05
Finished recover at 18-APR-11

RMAN> alter database open resetlogs;
database opened

CASE 3 : We do not have any backup and all control files are lost or corrupted

SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             281020744 bytes
Database Buffers           29360128 bytes
Redo Buffers                6332416 bytes

Now we create the control file manually at the SQL prompt:

SQL> CREATE CONTROLFILE REUSE DATABASE  "NOIDA"   NORESETLOGS archivelog
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 10
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 'D:\oracle\oradata\noida\REDO01.LOG' SIZE 50M,
GROUP 2 'D:\oracle\oradata\noida\REDO02.LOG' SIZE 50M,
GROUP 3 'D:\oracle\oradata\noida\REDO03.LOG' SIZE 50M
DATAFILE
'D:\oracle\oradata\noida\SYSTEM01.DBF' ,
'D:\oracle\oradata\noida\USERS01.DBF' ,
'D:\oracle\oradata\noida\EXAMPLE01.DBF' ,
'D:\oracle\oradata\noida\SYSAUX01.DBF' ,
'D:\oracle\oradata\noida\TRANS.DBF' ,
'D:\oracle\oradata\noida\UNDOTBS01.DBF'   ;
Control file created.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Disabled
Archive destination            D:\archive\
Oldest online log sequence     1
Next log sequence to archive   1
Current log sequence           1

SQL> select first_change# ,group# from v$log;
FIRST_CHANGE#     GROUP#
------------- ----------
      1313491          1
            0          3
            0          2
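If the subsequent open complains that a file needs media recovery, a recovery step would precede it; a minimal sketch:

SQL> recover database;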

SQL> alter database open;

SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE


Enjoy :-)