Friday, December 9, 2011

DBMS_METADATA Package : To Extract All DDL


There are various methods available to extract DDL from an Oracle database. We can extract the DDL either with the export/import utilities, with the DBMS_METADATA package, or with third-party tools. If we don't have any tools, the best way to extract DDL is the DBMS_METADATA package. Here we will discuss the various functions this package provides for extracting DDL from Oracle.

The DBMS_METADATA package provides a way to retrieve metadata from the database dictionary as XML or DDL. We can retrieve either a particular object type (such as a table, index, or procedure) or a heterogeneous collection of object types that form a logical unit (such as a database export or schema export).

The GET_xxxx functions are particularly useful for extracting DDL. They fetch the metadata for objects with a single call, encapsulating calls to OPEN, SET_FILTER, and so on. The function we use depends on the characteristics of the object type and on whether we want XML or DDL.

1.) GET_DDL( )   :  This function is used to fetch named objects, especially schema objects (tables, views). It can also be used with nameless objects, such as resource_cost.

Syntax   :
DBMS_METADATA.GET_DDL (
object_type     IN VARCHAR2,
name            IN VARCHAR2,
schema          IN VARCHAR2 DEFAULT NULL,
version         IN VARCHAR2 DEFAULT 'COMPATIBLE',
model           IN VARCHAR2 DEFAULT 'ORACLE',
transform       IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

Example  : Here we will extract the DDL for the "EMP" table.

SQL> select dbms_metadata.get_ddl('TABLE', 'EMP')   from dual ;

DBMS_METADATA.GET_DDL('TABLE','EMP')
--------------------------------------------------------------------------------
   CREATE TABLE "ALEX"."EMP"
   (    "EMPNO" NUMBER(4,0),
        "ENAME" VARCHAR2(10),
        "JOB" VARCHAR2(9),
        "MGR" NUMBER(4,0),
        "HIREDATE" DATE,
        "SAL" NUMBER(7,2),
        "COMM" NUMBER(7,2),
        "DEPTNO" NUMBER(2,0),
         CONSTRAINT "EMP_DEPT_FK" FOREIGN KEY ("DEPTNO")
          REFERENCES "ALEX"."DEPT" ("DEPTNO") ENABLE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "USERS"

Similarly, we can also extract the DDL of other objects such as tablespaces, views, and sequences, as shown in the sketch below.
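For instance, the DDL of an index or a tablespace can be fetched in the same way. The statements below are a minimal sketch : the index name "EMP_PK" is just an assumed example, and the SQL*Plus SET commands are included because SQL*Plus truncates long CLOB output by default.

SQL> set long 100000
SQL> set pagesize 0
SQL> select dbms_metadata.get_ddl('INDEX', 'EMP_PK', 'ALEX') from dual ;
SQL> select dbms_metadata.get_ddl('TABLESPACE', 'USERS') from dual ;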

If we want only the bare DDL, without the storage clause and constraints, then we need to modify some transformation parameters as follows :

SQL> begin
  2     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'STORAGE',false);
  3     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'TABLESPACE',false);
  4     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES', false);
  5     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'REF_CONSTRAINTS', false);
  6     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'CONSTRAINTS', false);
  7  end;
  8  /
 PL/SQL procedure successfully completed.

Now running the same statement as above gives the following result :

SQL> select dbms_metadata.get_ddl ('TABLE', 'EMP')  from dual ;
 DBMS_METADATA.GET_DDL('TABLE','EMP')
--------------------------------------------------------------------------------
   CREATE TABLE "ALEX"."EMP"
   (    "EMPNO" NUMBER(4,0),
        "ENAME" VARCHAR2(10),
        "JOB" VARCHAR2(9),
        "MGR" NUMBER(4,0),
        "HIREDATE" DATE,
        "SAL" NUMBER(7,2),
        "COMM" NUMBER(7,2),
        "DEPTNO" NUMBER(2,0)
   )

One of the nice things is that we don't need to modify all the transformation parameters one by one to go back to the defaults. Oracle makes it really easy to return to the default settings:

SQL> begin
  2     dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'DEFAULT');
  3  end;
  4  /
 PL/SQL procedure successfully completed.
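Another session transform parameter worth knowing is SQLTERMINATOR, which appends a SQL terminator (a semicolon, or a slash for PL/SQL) to each statement. A minimal sketch, useful when the extracted DDL is spooled into a script that will be run later :

SQL> begin
  2     dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'SQLTERMINATOR', true);
  3  end;
  4  /
PL/SQL procedure successfully completed.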

2.) GET_DEPENDENT_DDL( ) : This function is used to fetch objects that depend on a base object (for example audits and object grants).
 Syntax :
DBMS_METADATA.GET_DEPENDENT_DDL (
object_type         IN VARCHAR2,
base_object_name    IN VARCHAR2,
base_object_schema  IN VARCHAR2 DEFAULT NULL,
version             IN VARCHAR2 DEFAULT 'COMPATIBLE',
model               IN VARCHAR2 DEFAULT 'ORACLE',
transform           IN VARCHAR2 DEFAULT 'DDL',
object_count        IN NUMBER   DEFAULT 10000)
RETURN CLOB ;

Example :   In this example, we will extract the referential constraints that depend on other tables.

SQL> select dbms_metadata.get_dependent_ddl( 'REF_CONSTRAINT', table_name) DDL
              FROM USER_TABLES WHERE table_name = 'EMPLOYEES' ;
ALTER  TABLE  "HR"."EMPLOYEES" ADD CONSTRAINT  "EMP_DEPT_FK"  FOREIGN KEY ("DEPARTMENT_ID")  REFERENCES  "HR"."DEPARTMENTS"  ("DEPARTMENT_ID") ENABLE

ALTER  TABLE "HR"."EMPLOYEES"  ADD  CONSTRAINT "EMP_JOB_FK"  FOREIGN KEY ("JOB_ID") REFERENCES  "HR"."JOBS"  ("JOB_ID") ENABLE

ALTER TABLE "HR"."EMPLOYEES" ADD CONSTRAINT "EMP_MANAGER_FK" FOREIGN KEY ("MANAGER_ID")  REFERENCES "HR"."EMPLOYEES" ("EMPLOYEE_ID") ENABLE

3.) GET_GRANTED_DDL( ) : This function is used to fetch the objects granted to a database user (system grants, role grants).

Syntax : 


DBMS_METADATA.GET_GRANTED_DDL (
object_type     IN VARCHAR2,
grantee         IN VARCHAR2 DEFAULT NULL,
version         IN VARCHAR2 DEFAULT 'COMPATIBLE',
model           IN VARCHAR2 DEFAULT 'ORACLE',
transform       IN VARCHAR2 DEFAULT 'DDL',
object_count    IN NUMBER   DEFAULT 10000)
RETURN CLOB;

Example :  Here we will extract the "system grant" assigned to the user "SYSTEM "

SQL> select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','SYSTEM') from dual;

DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','SYSTEM')
--------------------------------------------------------------------------------
  GRANT GLOBAL QUERY REWRITE TO "SYSTEM"
  GRANT CREATE MATERIALIZED VIEW TO "SYSTEM"
  GRANT SELECT ANY TABLE TO "SYSTEM"
  GRANT CREATE TABLE TO "SYSTEM"
  GRANT UNLIMITED TABLESPACE TO "SYSTEM" WITH ADMIN OPTION

To extract all the grants for all the users, we can use the statements below; the EXISTS subqueries skip users that have no grants of the given type, which would otherwise raise an error.

SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', USERNAME) || '/' DDL FROM DBA_USERS where exists (select 'x' from dba_role_privs drp where drp.grantee = dba_users.username)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', USERNAME) || '/' DDL FROM DBA_USERS where exists (select 'x' from dba_sys_privs dsp where dsp.grantee = dba_users.username)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', USERNAME) || '/' DDL FROM DBA_USERS where exists (select 'x' from dba_tab_privs dtp where dtp.grantee = dba_users.username);
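If a user has no grants of the requested type, GET_GRANTED_DDL raises ORA-31608, which is why the EXISTS subqueries are there. As an alternative sketch, a PL/SQL block can simply catch and skip that error (run "set serveroutput on" first; printing through dbms_output is only suitable for reasonably short grant lists) :

SQL> begin
  2    for u in (select username from dba_users) loop
  3      begin
  4        dbms_output.put_line (dbms_metadata.get_granted_ddl ('SYSTEM_GRANT', u.username));
  5      exception
  6        when others then null ;  -- typically ORA-31608 : the user has no system grants
  7      end ;
  8    end loop ;
  9  end ;
 10  /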


Enjoy   :-) 

ORA-39083: Object type INDEX_STATISTICS failed, ORA-01403: no data found

Once while importing, the above error occurred. I generated the SQL file of this import operation. After analysing and googling, I found that this error can occur because an index is missing for some reason, which is why the impdp utility fails while importing the statistics associated with that particular missing index. In this case, the problem arises because the expdp utility puts the CREATE INDEX statements into the dumpfile in the wrong order. Below are the details.

C:\>impdp system/xxxx@orcl  DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema= SHAIK72:SHAIK72
Import: Release 11.2.0.1.0 - Production on Fri Dec 9 10:31:59 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SHAIK72"."AD_ARCHIVE"                      188.3 MB    1785 rows
. . imported "SHAIK72"."AD_ATTACHMENT"                   81.80 MB     240 rows
. . imported "SHAIK72"."T_REPORTSTATEMENT"               60.03 MB 1012428 rows
. . imported "SHAIK72"."AD_QUERYLOG"                     25.75 MB   98396 rows
. . imported "SHAIK72"."FACT_ACCT"                       25.82 MB  123994 rows
. . imported "SHAIK72"."T_TRIALBALANCE"                  13.90 MB   59977 rows
. . imported "SHAIK72"."AD_WF_ACTIVITY"                  18.46 MB  110882 rows
. . imported "SHAIK72"."AD_WF_EVENTAUDIT"                18.30 MB  110882 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-01403: no data found
ORA-01403: no data found
Failing sql is :
DECLARE I_N VARCHAR2(60);   I_O VARCHAR2(60);   c DBMS_METADATA.T_VAR_COLL;   df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS'; BEGIN  DELETE FROM "SYS"."IMPDP_STATS";   c(1) :=   DBMS_METADATA.GET_STAT_COLNAME('SHAIK72','C_BPARTNER','NULL ',NULL,0);  DBMS_METADATA.GET_STAT_INDNAME('SHAIK72','C_BPARTNER',c,1,i_o,i_n);   INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39082: Object type VIEW:"SHAIK72"."DSI_V_RETAILER_STRUCT" created with compilation warnings
ORA-39082: Object type VIEW:"SHAIK72"."DSI_V_SHIPMENT_DETAILS" created with compilation warnings
ORA-39082: Object type VIEW:"SHAIK72"."M_STORAGE_V" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 10:59:21


Here are the workarounds to solve this issue.

1.) Use the traditional export/import utilities.

2.) First import the dump by excluding the indexes, and then import the same dump again including only the indexes, as below :
a.)  impdp system/xxxx DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=SHAIK72:SHAIK72 exclude=indexes   and then

b.)  impdp system/xxxx DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp1.log remap_schema=SHAIK72:SHAIK72 include=indexes

3.)  While googling, I found that using the parameter EXCLUDE=STATISTICS may also solve this issue. Check the link below for more on this parameter; a sketch of the command follows.
http://dbhk.wordpress.com/category/impdp/
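As a sketch, reusing the same dump file as above, the command would look like this :

C:\>impdp system/xxxx@orcl DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp2.log remap_schema=SHAIK72:SHAIK72 EXCLUDE=STATISTICS

The statistics excluded this way can be regathered after the import with DBMS_STATS.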


Enjoy     :-) 

Wednesday, December 7, 2011

ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],[1002300]

Once we upgraded our database to fix some issues, and everything went well. But on the very next day, when we were dropping a user, we got the ORA-00600 internal error. The ORA-00600 error has the following arguments :

SQL> drop user test cascade ;
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],[1002300], [], [], [], [], [], [], [], [], [], []

ORA-600 is an internal error and can occur for various reasons. This issue has been identified as Bug 10373381. In my case it was caused by upgrading from a lower version to a higher version. I checked Metalink and found an interesting workaround. To solve this error, perform the steps below :

1.)  Apply the 11.2.0.2.2 PSU Patch: 11724916 or higher :  

If on Exadata, download and apply the 11.2.0.2 Bundle Patch 6 Patch:12326685 or higher (see Note 1314319.1 for the latest 11.2.0.2 Exadata patches).
If on Windows, apply the 11.2.0.2 Patch 3 or higher (see Note 1114533.1 for the latest 11.2.0.2 patch) :
32-Bit Patch:11731183
64-Bit (x64) Patch:11731184

2.)  If available for our platform and version, download and apply Patch 10373381

3.)  As a workaround, restore and perform a direct upgrade from 10.2.0.5 to the 11.2.0.2 release.



Enjoy    :-) 


Saturday, December 3, 2011

Transparent Data Encryption in Oracle 11g

Oracle Transparent Data Encryption (TDE) enables organizations to encrypt sensitive application data on storage media, completely transparently to the application. TDE addresses encryption requirements associated with public and private privacy and security regulations such as PCI DSS. TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of table columns containing sensitive information. TDE tablespace encryption and support for hardware security modules (HSM) were introduced in Oracle Database 11gR1.

TDE protects data at rest. It encrypts the data in the datafiles so that, in case they are obtained by other parties, it will not be possible to access the clear-text data. TDE cannot be used to hide the data from users who have privileges to access the tables. In databases where TDE is configured, any user who has access to an encrypted table will be able to see the data in clear text, because Oracle transparently decrypts the data for any user having the necessary privileges.
TDE uses a two-tier encryption key architecture consisting of :

  • a master encryption key - this is the encryption key used to encrypt the secondary keys used for column encryption and tablespace encryption.
  • one or more table and/or tablespace keys - these are the keys used to encrypt one or more specific columns, or the keys used to encrypt tablespaces. There is only one table key regardless of the number of encrypted columns in a table, and it is stored in the data dictionary. The tablespace key is stored in the header of each datafile of the encrypted tablespace.

The table and tablespace keys are encrypted using the master key. The master key is stored in an external security module (ESM) that can be one of the following:

  • an Oracle Wallet - a secure container outside of the database. It is encrypted with a password. 
  • a Hardware Security Module (HSM) - a device used to secure keys and perform cryptographic operations. 

To start using  TDE the following operations have to be performed:

1.) Make sure that the wallet location exists. If a non-default wallet location must be used, then specify it in the sqlnet.ora file :

ENCRYPTION_WALLET_LOCATION =
   (SOURCE = (METHOD = FILE)
     (METHOD_DATA =
      (DIRECTORY = C:\app\neerajs\admin\orcl\wallet)
     )
   )

Note : The default encryption wallet location is $ORACLE_BASE/admin/<global_db_name>/wallet. If we want to let Oracle manage a wallet in the default location then there is no need to set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora.

It is important to check that the location specified in sqlnet.ora or the default location exists and can be read/written by the Oracle processes.

2.) Generate a master key :

SQL> alter system set encryption key identified by "wallet_password" ;
system altered

This command will do the following :

A.) If there is no wallet currently in the wallet location, then a new wallet with the password "wallet_password" will be generated. The password is enclosed in double quotes to preserve the case of the characters; if the double quotes are not used, then the characters of the password will all be in upper case. This command will also cause the new wallet to be opened and made ready for use.

B.) A new master key will be generated and written to the wallet. This newly generated master key will become the active master key. The old master keys (if there were any) will still be kept in the wallet, but they will no longer be active. They are kept there to be used when decrypting data that was previously encrypted using them.

To see the status of a wallet, run the following query :

SQL> select * from v$encryption_wallet;
WRL_TYPE    WRL_PARAMETER                        STATUS
--------    --------------------------------     ------
file        C:\app\neerajs\admin\orcl\wallet     OPEN
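Note that an open wallet does not survive an instance restart : after the database is restarted, the wallet has to be opened explicitly before encrypted data can be accessed. A sketch of the commands (the password is the one chosen when the wallet was created; depending on the release, the CLOSE command may not require the password) :

SQL> alter system set encryption wallet open identified by "wallet_password" ;
SQL> alter system set encryption wallet close identified by "wallet_password" ;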

3.)  Enable encryption for a column or for an entire tablespace:

3.1) Create a table by specifying the encrypt option:

SQL> create table test(col1 number, col2 varchar2(100) encrypt using 'AES256' NO SALT) ;

3.2) Encrypt the column(s) of an existing table :

SQL> alter  table  test  modify( col2 encrypt SALT ) ;

Note : If the table has many rows, then this operation might take some time, since all the values stored in col2 must be replaced by encrypted strings. If access to the table is needed during this operation, then use Online Table Redefinition, sketched below.
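A highly condensed sketch of such a redefinition with DBMS_REDEFINITION, assuming the table has a primary key and using illustrative names (in practice DBMS_REDEFINITION.CAN_REDEF_TABLE should be called first to verify that the table can be redefined) :

SQL> create table test_interim (col1 number, col2 varchar2(100) encrypt using 'AES256' no salt) ;

SQL> declare
  2    num_errors pls_integer ;
  3  begin
  4    -- start copying rows from TEST into the interim (encrypted) definition
  5    dbms_redefinition.start_redef_table (user, 'TEST', 'TEST_INTERIM') ;
  6    -- clone indexes, triggers, constraints and grants onto the interim table
  7    dbms_redefinition.copy_table_dependents (user, 'TEST', 'TEST_INTERIM',
  8           dbms_redefinition.cons_orig_params, true, true, true, false, num_errors) ;
  9    -- swap the definitions : TEST now has the encrypted column
 10    dbms_redefinition.finish_redef_table (user, 'TEST', 'TEST_INTERIM') ;
 11  end ;
 12  /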

3.3)  Create an encrypted tablespace : The syntax is the same as for creating a normal tablespace, except for two clauses:
  • We specify the encryption algorithm, in this case 'AES256'. If we do not specify one, it defaults to 'AES128'. At tablespace creation time we specify both the encryption clause and the default storage clause.

We define the encryption algorithm as "using 'algorithm'" along with the encryption clause. We can use the following algorithms while creating an encrypted tablespace :
AES128
AES192
AES256
3DES168
If we don't specify any algorithm with the encryption clause, AES128 is used as the default.

  •  The DEFAULT STORAGE (ENCRYPT) clause.
SQL> create tablespace encryptedtbs  datafile 'C:\app\neerajs\oradata\orcl\encryptedtbs01.dbf'  size 100M encryption using  'AES256'  default storage(encrypt) ;

Note: An existing non-encrypted tablespace cannot be encrypted. If we must encrypt the data of an entire tablespace, then we have to create a new encrypted tablespace and move the data from the old tablespace to the new one, for example as shown below.
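A sketch with illustrative object names, moving a table (and rebuilding one of its indexes) into the encrypted tablespace created above :

SQL> alter table test move tablespace encryptedtbs ;
SQL> alter index test_idx rebuild tablespace encryptedtbs ;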

TDE Master Key and Wallet Management

The wallet is a critical component and should be backed up in a secure location (different from the location where the database backups are stored!). If the wallet containing the master keys is lost, or if its password is forgotten, then the encrypted data will not be accessible anymore. Make sure that the wallet is backed up in the following scenarios :

1. Immediately after creating it
2. When regenerating the master key
3. When backing up the database (make sure that the wallet backup is not stored in the same location as the database backup)
4. Before changing the wallet password

Make sure that the wallet password is complex but at the same time easy to remember. Where possible, split the knowledge of the wallet password among several people. If needed, the wallet password can be changed with Oracle Wallet Manager or with the following orapki command (starting from 11.1.0.7):

c:\> orapki wallet change_pwd -wallet <wallet_location>

Oracle recommends that the wallet files are placed outside of the $ORACLE_BASE directory to avoid having them backed up to same location as other Oracle files. Furthermore it is recommended to restrict the access to the directory and to the wallet files to avoid accidental removals. 

We can identify the encrypted tablespaces in the database by using the query below :
SQL>SELECT ts.name, es.encryptedts, es.encryptionalg FROM v$tablespace ts
INNER JOIN v$encrypted_tablespaces es  ON es.ts# = ts.ts# ; 
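Similarly, the encrypted columns can be listed from the DBA_ENCRYPTED_COLUMNS dictionary view :

SQL> select owner, table_name, column_name, encryption_alg, salt from dba_encrypted_columns ;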

The following are supported with encrypted tablespaces :
  • Moving tables back and forth between an encrypted tablespace and a non-encrypted tablespace.
  • Data Pump export/import of encrypted content/tablespaces.
  • Transportable tablespaces using Data Pump.
The following are not supported with encrypted tablespaces :
  • Tablespace encryption cannot be used for the SYSTEM, SYSAUX, UNDO and TEMP tablespaces.
  • An existing tablespace cannot be encrypted.
  • Traditional export/import utilities cannot be used with encrypted content.

Enjoy     :-) 


Thursday, December 1, 2011

Enable block change tracking in Oracle 11g


The block change tracking (BCT) feature improves incremental backup performance by recording the changed blocks of each datafile in a block change tracking file. This file is a small binary file, called the block change tracking (BCT) file, stored in the database area. RMAN tracks changed blocks as redo is generated. If we enable block change tracking, then RMAN uses the BCT file to identify the changed blocks for an incremental backup, thus avoiding the need to scan every block in the datafile. RMAN only uses block change tracking when the incremental level is greater than 0, because a level 0 incremental backup includes all blocks.

Enable block change tracking (BCT) 

SQL> alter database enable block change tracking  using file 'C:\app\neerajs\admin\noida\bct.dbf' ;
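As a side note, if the db_create_file_dest parameter is set, the USING FILE clause can be omitted and Oracle creates the tracking file as an Oracle-managed file :

SQL> alter database enable block change tracking ;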

When data blocks change, shadow processes track the changed blocks in a private area of memory at the same time they generate redo. When a commit is issued, the BCT information is copied to a shared area in the Large Pool called the 'CTWR dba buffer'. At checkpoint, a new background process, the Change Tracking Writer (CTWR), writes the information from the buffer to the change tracking file. If contention for space in the CTWR dba buffer occurs, a wait event called 'Block Change Tracking Buffer Space' is recorded. Possible causes of this wait event are poor I/O performance on the disk where the change tracking file resides, or a CTWR dba buffer that is too small to record the number of concurrent block changes. By default, the CTWR process is disabled because it can introduce some minimal performance overhead on the database.

The v$block_change_tracking view contains the name and size of the block change tracking file, plus the status of change tracking. We can check it with the command below :

SQL> select filename, status, bytes from v$block_change_tracking;

To check whether the block change tracking file is being used or not, use the command below.

SQL> select file#, avg(datafile_blocks), avg(blocks_read),
            avg(blocks_read/datafile_blocks) * 100 as "% read for backup"
     from   v$backup_datafile
     where  incremental_level > 0
     and    used_change_tracking = 'YES'
     group by file#
     order by file# ;

To disable block change tracking (BCT), issue the command below :
SQL> alter database disable block change tracking  ;



Enjoy    :-)