Tuesday, May 31, 2011

Post Checks After Database Cloning

As I have already covered cloning through different methods, here is a post-clone checklist for verifying the cloned database environment.

1.) Check Database Name and DBID :
After a clone is successfully created on the same or a different server, we should change the database name and ID (when the clone is on the same server, it already has a different name). It is recommended to change the DBID of the cloned database because Recovery Manager (RMAN) distinguishes databases by DBID. For more details, see the DBNEWID post below.
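For instance, the clone's current name and DBID can be confirmed straight from V$DATABASE :

SQL> select name, dbid from v$database;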

2.) Edit the listener.ora and tnsnames.ora files :
Edit the listener.ora file to include an entry for the cloned database, then invoke the lsnrctl utility and reload the listener. Similarly, check the service name in the tnsnames.ora file and make sure connections through it really reach the cloned database.
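A quick way to verify this (the alias "clone" is only an example; use whatever entry was added to tnsnames.ora) :

C:\>lsnrctl reload
C:\>tnsping clone
C:\>sqlplus system@clone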

3.) Check all the scheduled jobs :
If any scripts or jobs have been scheduled, make sure that all the required jobs are enabled for the cloned database.
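The scheduler and legacy job queues can be reviewed with queries such as :

SQL> select owner, job_name, enabled from dba_scheduler_jobs;
SQL> select job, what, broken from dba_jobs;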

4.) Verify the parameter file :
Make sure the file locations mentioned in the pfile/spfile of the cloned database are correct, and that the values of all parameters are valid for the new environment.
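For example, the key location parameters and every non-default setting can be reviewed with :

SQL> show parameter control_files
SQL> show parameter db_recovery_file_dest
SQL> select name, value from v$parameter where isdefault = 'FALSE';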

5.) Check the tempfiles :
After the clone database has been created successfully, check its tempfiles. If the temporary tablespace has no tempfile, add a new one to the clone database.
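A minimal sketch (the tablespace name TEMP, the file path and the size are assumptions for this environment) :

SQL> select name, bytes/1024/1024 mb from v$tempfile;
SQL> alter tablespace temp add tempfile 'D:\oracle\oradata\clone\temp01.dbf' size 500m autoextend on;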

6.) Check the archivelog :
Generally, archivelog mode is disabled for UAT/cloned databases. In case our production database is in archivelog mode, make sure that archiving is disabled in the cloned database.
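This can be checked and, if necessary, changed as follows :

SQL> archive log list
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;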

7.) Check for database links :
Check the database links present in the cloned environment. Ensure that these are select-only links and will not perform any DML against production databases. If any risky links are found, either drop them, recreate them to point to a UAT database, or simply remove/comment out the tnsnames.ora entries for those hosts. Also check for any hard-coded IP address in the HOST column of DBA_DB_LINKS.
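For example (the link name prod_link is hypothetical; a private link must be dropped by its owner, a public one with DROP PUBLIC DATABASE LINK) :

SQL> select owner, db_link, username, host from dba_db_links;
SQL> drop database link prod_link;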


Enjoy    :-)


How To Change DBID and DB NAME


DBNEWID is a database utility that can change the internal database identifier (DBID) and the database name (DBNAME). The DBNEWID utility allows us to change any of the following:

 1.) Only the DBID of a database
 2.) Only the DBNAME of a database
 3.) Both the DBNAME and DBID of a database

1.) Only the DBID of a database :   The DBID is an internal, unique identifier for a database. Because Recovery Manager (RMAN) distinguishes databases by DBID, we cannot register a seed database and a manually copied database together in the same RMAN repository.
Changing the DBID of a database is a serious procedure. When the DBID of a database is changed, all previous backups and archived logs of the database become unusable. This is similar to creating a database, except that the data is already in the datafiles. After we change the DBID, backups and archived logs that were created prior to the change can no longer be used, because they still carry the original DBID, which does not match the current DBID. We must open the database with the RESETLOGS option, which re-creates the online redo logs and resets their sequence to 1.

Below are the steps to change the DBID of the database.

C:\>set ORACLE_SID=test
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon May 30 14:16:48 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area            263639040 bytes
Fixed Size                                    1373964 bytes
Variable Size                                  205523188 bytes
Database Buffers                            50331648 bytes
Redo Buffers                                  6410240 bytes
Database mounted.
Database opened.
SQL> select name,dbid from v$database;
NAME               DBID
---------             ----------
TEST              2050645904

SQL> archive log list
Database log mode                   Archive Mode
Automatic archival                     Enabled
Archive destination                    USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence       10
Next log sequence to archive    12
Current log sequence                12
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area                263639040 bytes
Fixed Size                                       1373964 bytes
Variable Size                                   205523188 bytes
Database Buffers                             50331648 bytes
Redo Buffers                                   6410240 bytes
Database mounted.

SQL> host nid target=sys/test@test

DBNEWID: Release 11.2.0.1.0 - Production on Mon May 30 14:25:13 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to database TEST (DBID=2050645904)
Connected to server version 11.2.0
Control Files in database:
    D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL
    D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL

Change database ID of database TEST? (Y/[N]) => Y
Proceeding with operation
Changing database ID from 2050645904 to 2050636234
Control File D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL - modified
Control File D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL - modified
    Datafile D:\APP\NEERAJS\ORADATA\TEST\SYSTEM01.DB - dbid changed
    Datafile D:\APP\NEERAJS\ORADATA\TEST\SYSAUX01.DB - dbid changed
    Datafile D:\APP\NEERAJS\ORADATA\TEST\UNDOTBS01.DB - dbid changed
    Datafile D:\APP\NEERAJS\ORADATA\TEST\USERS01.DB - dbid changed
    Datafile D:\APP\NEERAJS\ORADATA\TEST\EXAMPLE01.DB - dbid changed
    Datafile D:\APP\NEERAJS\ORADATA\TEST\TEMP01.DB - dbid changed
Control File D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL - dbid changed
Control File D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL
    dbid changed
    Instance shut down

Database ID for database TEST changed to 2050636234.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in the Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database ID.
DBNEWID - Completed succesfully.

C:\>sqlplus sys/test@test as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon May 30 14:26:13 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area       263639040 bytes
Fixed Size                                 1373964 bytes
Variable Size                             205523188 bytes
Database Buffers                      50331648 bytes
Redo Buffers                            6410240 bytes
Database mounted.
SQL> alter database open resetlogs;
Database altered.

SQL> select name,dbid from v$database;
NAME                  DBID
---------              ----------
TEST                   2050636234

SQL> archive log list
Database log mode                     Archive Mode
Automatic archival                      Enabled
Archive destination                     USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence         1
Next log sequence to archive      1
Current log sequence                  1

2.) Only the DBNAME of a database  :  Changing the DBNAME without changing the DBID does not require us to open with the RESETLOGS option, so database backups and archived logs are not invalidated. However, changing the DBNAME does have consequences. We must change the DB_NAME initialization parameter after a database name change to reflect the new name. Also, we may have to re-create the Oracle password file. If we restore an old backup of the control file (before the name change), then we should use the initialization parameter file and password file from before the database name change.

Below are the steps to change the name of the database :
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area         263639040 bytes
Fixed Size                               1373964 bytes
Variable Size                           209717492 bytes
Database Buffers                     46137344 bytes
Redo Buffers                          6410240 bytes
Database mounted.
SQL> host nid TARGET=SYS/test@test DBNAME=newtest SETNAME=YES
DBNEWID: Release 11.2.0.1.0 - Production on Mon May 30 14:35:45 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to database TEST (DBID=2050636234)
Connected to server version 11.2.0
Control Files in database:
    D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL
    D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL

Change database name of database TEST to NEWTEST? (Y/[N]) => Y
Proceeding with operation
Changing database name from TEST to NEWTEST
Control File D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL - modified
Control File D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL - modified
    Datafile D:\APP\NEERAJS\ORADATA\TEST\SYSTEM01.DB - wrote new name
    Datafile D:\APP\NEERAJS\ORADATA\TEST\SYSAUX01.DB - wrote new name
    Datafile D:\APP\NEERAJS\ORADATA\TEST\UNDOTBS01.DB - wrote new name
    Datafile D:\APP\NEERAJS\ORADATA\TEST\USERS01.DB - wrote new name
    Datafile D:\APP\NEERAJS\ORADATA\TEST\EXAMPLE01.DB - wrote new name
    Datafile D:\APP\NEERAJS\ORADATA\TEST\TEMP01.DB - wrote new name
    Control File D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL - wrote new name
    Control File D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL - wrote new name
    Instance shut down

Database name changed to NEWTEST.
Modify parameter file and generate a new password file before restarting.
Succesfully changed database name.
DBNEWID - Completed succesfully.

SQL> exit
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon May 30 14:36:39 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> create pfile='c:\inittest.ora' from spfile;
File created.

Now open the pfile inittest.ora and replace the db_name parameter value "test" with "newtest".
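After the edit, the entry (the exact prefix may differ in your pfile) should read :

*.db_name='newtest'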

SQL> create spfile from  pfile='c:\inittest.ora' ;
File created.
SQL> startup
ORACLE instance started.
Total System Global Area           263639040 bytes
Fixed Size                                  1373964 bytes
Variable Size                              209717492 bytes
Database Buffers                       46137344 bytes
Redo Buffers                             6410240 bytes
Database mounted.
Database opened.
SQL> select open_mode ,name from v$database;
OPEN_MODE            NAME
-------------------- ---------
READ WRITE           NEWTEST
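If the password file also has to be recreated, as the DBNEWID output above suggests, a hedged sketch on Windows would be (the ORACLE_HOME path and the password are placeholders for this environment) :

C:\>orapwd file=<ORACLE_HOME>\database\PWDnewtest.ora password=test entries=10 force=y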

3.) Both the DBNAME and DBID of a database

C:\>sqlplus sys/test@newtest as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon May 30 15:54:45 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select dbid, name,open_mode from v$database;
      DBID                       NAME                       OPEN_MODE
----------                       ---------                     -----------------
2050636234                   NEWTEST                 READ WRITE

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon May 30 16:01:03 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup mount
ORACLE instance started.
Total System Global Area             263639040 bytes
Fixed Size                                     1373964 bytes
Variable Size                                 213911796 bytes
Database Buffers                           41943040 bytes
Redo Buffers                                 6410240 bytes
Database mounted.

SQL> host nid target=sys/test@newtest dbname=prod

DBNEWID: Release 11.2.0.1.0 - Production on Mon May 30 16:11:17 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to database NEWTEST (DBID=2050636234)
Connected to server version 11.2.0
Control Files in database:
    D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL
    D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL

Change database ID and database name NEWTEST to PROD? (Y/[N]) => Y
Proceeding with operation
Changing database ID from 2050636234 to 164330150
Changing database name from NEWTEST to PROD
Control File D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL - modified
Control File D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL - modified
Datafile D:\APP\NEERAJS\ORADATA\TEST\SYSTEM01.DB - dbid changed, wrote new name
Datafile D:\APP\NEERAJS\ORADATA\TEST\SYSAUX01.DB - dbid changed, wrote new name
Datafile D:\APP\NEERAJS\ORADATA\TEST\UNDOTBS01.DB - dbid changed, wrote new name
Datafile D:\APP\NEERAJS\ORADATA\TEST\USERS01.DB - dbid changed, wrote new name
Datafile D:\APP\NEERAJS\ORADATA\TEST\EXAMPLE01.DB - dbid changed, wrote new name
Datafile D:\APP\NEERAJS\ORADATA\TEST\TEMP01.DB - dbid changed, wrote new name
Control File D:\APP\NEERAJS\ORADATA\TEST\CONTROL01.CTL - dbid changed, wrote new name
Control File D:\APP\NEERAJS\FLASH_RECOVERY_AREA\TEST\CONTROL02.CTL - dbid changed, wrote new name
Instance shut down

Database name changed to PROD.
Modify parameter file and generate a new password file before restarting.
Database ID for database PROD changed to 164330150.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in the Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.

SQL> exit
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon May 30 16:43:18 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             209717492 bytes
Database Buffers           46137344 bytes
Redo Buffers                6410240 bytes
Database mounted.
SQL> alter database open resetlogs;
Database altered.
SQL> select name,dbid,open_mode from v$database;
NAME              DBID                    OPEN_MODE
---------          ----------                ----------------
PROD            164330150                READ WRITE


Enjoy      :-)

Saturday, May 28, 2011

ORA-27300: OS system dependent operation, ORA-27301, ORA-27302


Once, while monitoring, I found that the error below was periodically occurring in my alert log file. The error is:
Process startup failed, error stack:
Errors in file d:\app\diag\rdbms\royal1\royal1\trace\royal1_psp0_1700.trc:
ORA-27300: OS system dependent operation:CreateThread failed with status: 8
ORA-27301: OS failure message: Not enough storage is available to process this command.
ORA-27302: failure occurred at: ssthrddcr.


From the above error message, I guessed that it was related to the OS. On googling, I found that the message may be caused by a bug and was referred to Metalink BUG 3411021. After going through some documentation, we came to the conclusion that :


The system lacked the necessary resources to create another process, or the system-imposed limit on the total number of processes under execution system-wide or by a single user {CHILD_MAX} would be exceeded.

In my case, our server has 24GB of RAM and the database runs on a Windows server. When I checked the virtual memory (swap), I found that it was set to 24GB and system managed, while the recommended size was 36GB. So I decided to increase the virtual memory (swap) to 40GB, and after that the error no longer occurred.


To solve this error, ensure that the existing memory is sufficient and functioning properly, and check the OS logs for any hardware errors. Another option is to decrease the size of the SGA so that OS processes can get more memory; a sketch is shown below. Hope this helps to solve the issue.
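On an instance that is sized through SGA_TARGET, this could look like the following (the 16G value is purely illustrative; choose a size that suits your server and workload) :

SQL> show parameter sga_target
SQL> alter system set sga_target=16G scope=spfile;
SQL> shutdown immediate
SQL> startup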


Enjoy       :-)   

Friday, May 27, 2011

What is Data Masking ?


Data masking is the process of protecting sensitive information in non-production databases from inappropriate visibility. After sanitization, the database remains perfectly usable - the look-and-feel is preserved - but the information content is secure. The Data Masker software provides a simple, repeatable and "push-button" method of scrambling data in test systems.

It is important to be aware that data masking applies to more than just personal details – sometimes business-confidential information is appropriate for masking as well. For example, it may be desirable to prevent quarterly sales figures for some products from being present in an outsourced test database.
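As a simple illustration of the idea (the table and column names here are hypothetical, and real masking tools apply far more sophisticated, rule-driven scrambling), masking a test copy could be as basic as :

SQL> update customers set email = 'user' || customer_id || '@example.com', credit_card_no = null;
SQL> commit;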

Why Mask Data ?                                                                                                                                      
1.) Legal Requirements :  The regulatory environment surrounding the duties and obligations of a data holder to protect the information they maintain is becoming increasingly rigorous in just about every legal jurisdiction. It is a pretty safe assumption that the standards for the security and maintenance of data will become increasingly strict in the future.

2.) Loss of Confidence And Public Relations Disasters :   It can reasonably be said, in most locations, that if a data escape happens at our organization, then the formal legal sanctions applied by governmental bodies are not the only problem we will be facing. Possibly it may not even be the biggest of our immediate worries.

3.) Malicious Exposure :    Most people think the major risk to the information they hold is external entities (and organized syndicates) out to break in and steal the data. The assumption then follows that protecting the network and firewalls is the appropriate and sufficient response. There is no denying that such protection is necessary – however, it has been shown that in many cases the data is stolen by malicious insiders who have already been granted access to it.

4.) Accidental Exposure :   The risk of accidental exposure of information is often neglected when considering the security risks associated with real test data. Often it is thought that "there is no point in masking the test data because everybody has access to production anyway". Not so; the risks associated with an accidental exposure of the data remain. Often just masking the most sensitive information (credit card numbers, customer email addresses, etc.) is enough to mitigate much of the damage associated with accidental exposure, and the masked databases remain just as functional.

What Data To Mask :

I.) Light Masking on a Bug-Fix or Fire-Fighting Database .

II.) Medium Masking on Internal Development Databases .

III.) Thorough Masking on an Outsourced Database .

Is My Data Too Complex to be Masked ?
Not likely. Data Masker handles even the most intricate data structures. It can preserve data relationships between rows in related tables, between rows in the same table, or even between columns in the same row. Data synchronization issues of this type can be handled automatically by adding simple, easily configured masking rules. Don't worry about the size of the data either - Data Masker has successfully masked tables containing hundreds of millions of rows.

What Platforms and Databases does Data Masker Support ?
Oracle, SQL Server and DB2 UDB versions are available. The Data Masker software is installed on a Windows PC and operates on both local and remote databases. There are no server-side components for any Data Masker version. Oracle versions 9i, 10g and 11g, SQL Server versions 2000, 2005 and 2008, and DB2 UDB (LUW) versions 8.2 or greater are supported.

Enjoy     :-)




ORA-04030 : Out of Process Memory

ORA-04030 indicates that the Oracle server process is unable to allocate more memory from the operating system. This memory consists of the PGA (Program Global Area), and its contents depend upon the server configuration. For dedicated server processes it contains the stack and the UGA (User Global Area), which holds user session data, cursor information and the sort area. In a multithreaded configuration (shared server), the UGA is allocated in the SGA (System Global Area) and will not be responsible for ORA-04030 errors.
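To get a feel for how much PGA memory the instance and individual processes are using, queries like the following can help (the ordering is just for convenience) :

SQL> select name, round(value/1024/1024) mb from v$pgastat where name in ('total PGA allocated','maximum PGA allocated');
SQL> select spid, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process order by pga_alloc_mem desc;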

In my case it occurred when a client program connected to the Oracle database: the Oracle server process failed and threw ORA-04030 Out of Process Memory.

After getting this error, we troubleshot it, and after some googling we finally came to the conclusion that it was a bug. In the end we had to raise an Oracle Support request for this error.

This error can occur for various reasons, which are listed below :

1.) The Oracle process needs more memory to serve the client program's request and asks the operating system for additional memory, but the operating system cannot serve the request. This is likely to happen if the OS does not have enough memory or no swap space is available.

2.) The OS imposes a limit on the memory an Oracle process may use. The process has already reached that limit, and when it requests additional memory the error appears.

3.) It might be an Oracle bug.

Below are a few links which are quite useful for understanding this error.

http://orainternals.wordpress.com/2009/08/06/ora-4031-and-shared-pool-duration/
http://jhdba.wordpress.com/2009/11/13/problems-with-sga-a-multiiple-of-4gb-and-high-cpu-count/
http://www.tek-tips.com/viewthread.cfm?qid=835937
http://gdesaboyina.wordpress.com/2009/12/04/ora-04030-out-of-process-memory-when-trying-to-allocate-840-bytes-kgsp-heapkglss/


Enjoy      :-)


remap_table Parameter of Data-Pump in Oracle 11g


Oracle 11g Data Pump provides a new feature, the REMAP_TABLE parameter, to remap table data to a new table name on the target database. We can use the REMAP_TABLE parameter to rename entire tables.
Syntax :
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

In 10g Data Pump, we use the REMAP_SCHEMA parameter to remap the schema name during the import, or the FROMUSER and TOUSER parameters in original import. There is no parameter to remap table names. This means that 10g Import Data Pump can only import data into a table with the same name as the original table.

If we have to import table data into a database that already contains a table of the same name and structure, we have to use one of the two workarounds below (a sketch of the second follows the list).

I.) Rename the original source table temporarily.

II.) If the original source table cannot be renamed, then follow the steps below :
a.) Import the dump into another schema.
b.) Rename the table.
c.) Export the table again.
d.) Finally, import the renamed table into the target schema.
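A hedged sketch of workaround II (all names here - the staging schema hr_stage, the dump files and the passwords - are assumptions for illustration, and suitable Data Pump privileges are required) :

C:\>impdp system/***@noida dumpfile=hr_test.dmp remap_schema=hr:hr_stage
SQL> conn hr_stage/hr_stage@noida
SQL> alter table test rename to newtest;
C:\>expdp hr_stage/hr_stage@noida dumpfile=hr_stage_newtest.dmp tables=newtest
C:\>impdp system/***@noida dumpfile=hr_stage_newtest.dmp remap_schema=hr_stage:hr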

REMAP_TABLE allows us to rename tables during an import operation. Here is a demo of REMAP_TABLE :

Here, we will create a table, export it, and import it back into the same schema. In this scenario the table is named "test" and we will rename it to "newtest".

1.) Create a table  "test"

SQL> conn hr/hr@noida
Connected.
SQL> create table test(id number);
Table created.
SQL> insert into test values (1);
1 row created.
SQL> insert into test values (2);
1 row created.
SQL> insert into test values (3);
1 row created.
SQL> insert into test values (4);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
         1
         2
         3
         4

2.) Export the table "test"

SQL> host expdp hr/hr@noida    dumpfile=hr_test.dmp    logfile=hrtestlog.log     tables=test

Export: Release 11.2.0.1.0 - Production on Fri May 27 11:20:43 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HR"."SYS_EXPORT_TABLE_01":  hr/********@noida dumpfile=hr_test.dmp logfile=hrtestlog.log tables=test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "HR"."TEST"                                 5.031 KB       4 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
  D:\APP\NEERAJS\ADMIN\NOIDA\DPDUMP\HR_TEST.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 11:21:16

Since we now have a dump of the table "test", we import it into the HR schema with the new name "newtest".

3.) Import the dump with remap_table Parameter

SQL>host impdp hr/hr@noida  dumpfile=hr_test.dmp logfile=imphrtestlog.log remap_table=hr.test:newtest
Import: Release 11.2.0.1.0 - Production on Fri May 27 11:22:11 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "HR"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "HR"."SYS_IMPORT_FULL_04":  hr/********@noida dumpfile=hr_test.dmp logfile=imphrtestlog.log remap_table=hr.test:newtest
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."NEWTEST"                              5.031 KB       4 rows
Job "HR"."SYS_IMPORT_FULL_04" successfully completed at 11:22:25

Since the job completed successfully, we check the imported table, i.e. "newtest" :

SQL> select * from tab;
TNAME                                     TABTYPE                         CLUSTERID
----------------------                       ------------                         ----------------
COUNTRIES                               TABLE
DEPARTMENTS                         TABLE
EMPLOYEES                              TABLE
EMP_DETAILS_VIEW                VIEW
JOBS                                          TABLE
JOB_HISTORY                           TABLE
LOCATIONS                               TABLE
NEWTEST                                  TABLE
REGIONS                                    TABLE
SYS_IMPORT_FULL_01             TABLE
SYS_IMPORT_FULL_02             TABLE
SYS_IMPORT_FULL_03             TABLE
TEST                                            TABLE

13 rows selected.

SQL> select * from newtest;
        ID
----------
         1
         2
         3
         4

Note : Only objects created by the Import will be remapped. In particular, preexisting tables will not be remapped if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.

Enjoy   :-) 


Monday, May 23, 2011

Undocumented/Hidden Parameters in Oracle 11g

Oracle contains two types of initialization parameters, i.e.,

1.) Documented Initialization Parameter.
2.) Undocumented Initialization Parameter. 

Here, we will look at the undocumented initialization parameters.

Undocumented initialization parameters are also called hidden parameters, and no documentation is available about them. Every version of Oracle has special undocumented initialization parameters. These undocumented initialization parameters are usually only used in emergencies or with the express consent of Oracle technical support.

The X$KSPPI table contains all of the parameters that influence the operation of the Oracle database. X$ tables are accessible only when we connect to the Oracle instance as the user SYS.

The current version, Oracle 11g, has 2394 parameters here, out of which 343 are listed in the V$PARAMETER view. This means that there are still 2051 parameters which are not listed in either the V$ views or the Oracle documentation. Hidden parameter names start with an underscore "_". To count all initialization parameters, run the command below.

SQL> select count(*) from X$KSPPI;
  COUNT(*)
----------
      2394

To count the documented parameters, run the following command.

SQL> select count(*) from v$parameter;
  COUNT(*)
----------
       343

Here we will discuss when to use hidden parameters, where to use them, and what is involved in using them. Some issues with using these parameters are listed below :

1.) Undocumented parameters change from version to version (even within the same version, from patchset to patchset) without any notice. These parameters are not consistent across versions.

2.) Not all of these parameters are available on every port, which means they are not compatible across operating systems. A few parameters are operating-system specific.

3.) Use of undocumented parameters without the knowledge of Oracle Support is a violation of the support agreement with Oracle. Oracle Worldwide Support has every right to refuse to support databases affected by using these parameters.

4.) A few parameters can potentially corrupt a database, yet some of those same parameters can help us bring up a corrupted database with minimal data loss. So these parameters are equally useful in spite of those dangers.

5.) Since no external documentation is available, it is important to test these parameters before using them in our production system. It is also suggested to take a complete backup before using them.
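One practical point, shown purely as an illustration (never change a hidden parameter without Oracle Support's guidance): because the name starts with an underscore, it must be enclosed in double quotes when it is set, for example with a parameter that appears in the pfile dump further below :

SQL> alter system set "_complex_view_merging"=false scope=spfile;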

We can list all the undocumented (hidden) parameters with the following query.

SQL> select a.ksppinm name,
            b.ksppstvl value,
            b.ksppstdf deflt,
            decode(a.ksppity, 1, 'boolean',
                              2, 'string',
                              3, 'number',
                              4, 'file',
                              a.ksppity) type,
            a.ksppdesc description
     from   sys.x$ksppi a, sys.x$ksppcv b
     where  a.indx = b.indx
     and    a.ksppinm like '\_%' escape '\'
     order  by name;

We can also check both the hidden and non-hidden parameters currently in use by creating a pfile from memory. The command for creating the pfile is :
SQL> create pfile='C:\pfile.ora' from memory;

The above command will not work in 10g or lower versions of Oracle. In my case, the content of the pfile created from memory is as follows :

__db_cache_size=104M
__java_pool_size=4M
__large_pool_size=4M
__oracle_base='D:\oracle' # ORACLE_BASE set from environment
__pga_aggregate_target=132M
__sga_target=196M
__shared_io_pool_size=0
__shared_pool_size=76M
__streams_pool_size=0
_aggregation_optimization_settings=0
_always_anti_join='CHOOSE'
_always_semi_join='CHOOSE'
_and_pruning_enabled=TRUE
_b_tree_bitmap_plans=TRUE
_bloom_filter_enabled=TRUE
_bloom_folding_enabled=TRUE
_bloom_pruning_enabled=TRUE
_complex_view_merging=TRUE
_compression_compatibility='11.2.0.0.0'
_connect_by_use_union_all='TRUE'
_convert_set_to_join=FALSE
_cost_equality_semi_join=TRUE
_cpu_to_io=0
_dimension_skip_null=TRUE
_eliminate_common_subexpr=TRUE
_enable_type_dep_selectivity=TRUE
_fast_full_scan_enabled=TRUE
_first_k_rows_dynamic_proration=TRUE
_gby_hash_aggregation_enabled=TRUE
_generalized_pruning_enabled=TRUE
_globalindex_pnum_filter_enabled=TRUE
_gs_anti_semi_join_allowed=TRUE
_improved_outerjoin_card=TRUE
_improved_row_length_enabled=TRUE
_index_join_enabled=TRUE
_ksb_restart_policy_times='0'
_ksb_restart_policy_times='60'
_ksb_restart_policy_times='120'
_ksb_restart_policy_times='240' # internal update to set default
_left_nested_loops_random=TRUE
_local_communication_costing_enabled=TRUE
_minimal_stats_aggregation=TRUE
_mmv_query_rewrite_enabled=TRUE
_new_initial_join_orders=TRUE
_new_sort_cost_estimate=TRUE
_nlj_batching_enabled=1
_optim_adjust_for_part_skews=TRUE
_optim_enhance_nnull_detection=TRUE
_optim_new_default_join_sel=TRUE
_optim_peek_user_binds=TRUE
_optimizer_adaptive_cursor_sharing=TRUE
_optimizer_better_inlist_costing='ALL'
_optimizer_cbqt_no_size_restriction=TRUE
_optimizer_coalesce_subqueries=TRUE
_optimizer_complex_pred_selectivity=TRUE
_optimizer_compute_index_stats=TRUE
_optimizer_connect_by_combine_sw=TRUE
_optimizer_connect_by_cost_based=TRUE
_optimizer_connect_by_elim_dups=TRUE
_optimizer_correct_sq_selectivity=TRUE
_optimizer_cost_based_transformation='LINEAR'
_optimizer_cost_hjsmj_multimatch=TRUE
_optimizer_cost_model='CHOOSE'
_optimizer_dim_subq_join_sel=TRUE
_optimizer_distinct_agg_transform=TRUE
_optimizer_distinct_elimination=TRUE
_optimizer_distinct_placement=TRUE
_optimizer_eliminate_filtering_join=TRUE
_optimizer_enable_density_improvements=TRUE
_optimizer_enable_extended_stats=TRUE
_optimizer_enhanced_filter_push=TRUE
_optimizer_extend_jppd_view_types=TRUE
_optimizer_extended_cursor_sharing='UDO'
_optimizer_extended_cursor_sharing_rel='SIMPLE'
_optimizer_extended_stats_usage_control=224
_optimizer_fast_access_pred_analysis=TRUE
_optimizer_fast_pred_transitivity=TRUE
_optimizer_filter_pred_pullup=TRUE
_optimizer_fkr_index_cost_bias=10
_optimizer_group_by_placement=TRUE
_optimizer_improve_selectivity=TRUE
_optimizer_join_elimination_enabled=TRUE
_optimizer_join_factorization=TRUE
_optimizer_join_order_control=3
_optimizer_join_sel_sanity_check=TRUE
_optimizer_max_permutations=2000
_optimizer_mode_force=TRUE
_optimizer_multi_level_push_pred=TRUE
_optimizer_native_full_outer_join='FORCE'
_optimizer_new_join_card_computation=TRUE
_optimizer_null_aware_antijoin=TRUE
_optimizer_or_expansion='DEPTH'
_optimizer_order_by_elimination_enabled=TRUE
_optimizer_outer_to_anti_enabled=TRUE
_optimizer_push_down_distinct=0
_optimizer_push_pred_cost_based=TRUE
_optimizer_rownum_bind_default=10
_optimizer_rownum_pred_based_fkr=TRUE
_optimizer_skip_scan_enabled=TRUE
_optimizer_sortmerge_join_inequality=TRUE
_optimizer_squ_bottomup=TRUE
_optimizer_star_tran_in_with_clause=TRUE
_optimizer_system_stats_usage=TRUE
_optimizer_table_expansion=TRUE
_optimizer_transitivity_retain=TRUE
_optimizer_try_st_before_jppd=TRUE
_optimizer_undo_cost_change='11.2.0.1'
_optimizer_unnest_corr_set_subq=TRUE
_optimizer_unnest_disjunctive_subq=TRUE
_optimizer_use_cbqt_star_transformation=TRUE
_optimizer_use_feedback=TRUE
_or_expand_nvl_predicate=TRUE
_ordered_nested_loop=TRUE
_parallel_broadcast_enabled=TRUE
_partition_view_enabled=TRUE
_pivot_implementation_method='CHOOSE'
_pre_rewrite_push_pred=TRUE
_pred_move_around=TRUE
_push_join_predicate=TRUE
_push_join_union_view=TRUE
_push_join_union_view2=TRUE
_px_minus_intersect=TRUE
_px_pwg_enabled=TRUE
_px_ual_serial_input=TRUE
_query_rewrite_setopgrw_enable=TRUE
_remove_aggr_subquery=TRUE
_replace_virtual_columns=TRUE
_right_outer_hash_enable=TRUE
_selfjoin_mv_duplicates=TRUE
_sql_model_unfold_forloops='RUN_TIME'
_sqltune_category_parsed='DEFAULT' # parsed sqltune_category
_subquery_pruning_enabled=TRUE
_subquery_pruning_mv_enabled=FALSE
_system_trig_enabled=FALSE
_table_scan_cost_plus_one=TRUE
_undo_autotune=FALSE
_union_rewrite_for_gs='YES_GSET_MVS'
_unnest_subquery=TRUE
_use_column_stats_for_function=TRUE
aq_tm_processes=0
audit_file_dest='D:\ORACLE\ADMIN\INDIA\ADUMP'
audit_trail='DB'
compatible='11.2.0.0.0'
control_files='D:\ORACLE\ORADATA\INDIA\CONTROL01.CTL'
control_files='D:\ORACLE\FLASH_RECOVERY_AREA\INDIA\CONTROL02.CTL'
core_dump_dest='d:\oracle\diag\rdbms\india\india\cdump'
db_block_size=8192
db_domain=''
db_name='india'
db_recovery_file_dest='D:\oracle\flash_recovery_area'
db_recovery_file_dest_size=4977M
diagnostic_dest='D:\ORACLE'
dispatchers='(PROTOCOL=TCP) (SERVICE=indiaXDB)'
enable_ddl_logging=FALSE
log_buffer=6029312 # log buffer update
memory_target=328M
open_cursors=300
optimizer_dynamic_sampling=2
optimizer_mode='ALL_ROWS'
plsql_warnings='DISABLE:ALL' # PL/SQL warnings at init.ora
processes=150
query_rewrite_enabled='TRUE'
recyclebin='OFF'
remote_login_passwordfile='EXCLUSIVE'
resource_manager_plan=''
result_cache_max_size=864K
skip_unusable_indexes=TRUE
undo_retention=900
undo_tablespace='UNDOTBS1' 




Enjoy   :-)