Friday, December 9, 2011

ORA-39083: Object type INDEX_STATISTICS failed, ORA-01403: no data found

Once, while importing, the above error occurred. I generated the SQL file of this import operation. After analysing and googling, I found that this error can occur when an index is missing for some reason, which is why the impdp utility fails to import the statistics associated with that particular missing index. In this case, the problem arises because the expdp utility puts the CREATE INDEX statements into the dumpfile in the wrong order. Below are the details.

C:\>impdp system/xxxx@orcl  DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema= SHAIK72:SHAIK72
Import: Release 11.2.0.1.0 - Production on Fri Dec 9 10:31:59 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SHAIK72"."AD_ARCHIVE"                      188.3 MB    1785 rows
. . imported "SHAIK72"."AD_ATTACHMENT"                   81.80 MB     240 rows
. . imported "SHAIK72"."T_REPORTSTATEMENT"               60.03 MB 1012428 rows
. . imported "SHAIK72"."AD_QUERYLOG"                     25.75 MB   98396 rows
. . imported "SHAIK72"."FACT_ACCT"                       25.82 MB  123994 rows
. . imported "SHAIK72"."T_TRIALBALANCE"                  13.90 MB   59977 rows
. . imported "SHAIK72"."AD_WF_ACTIVITY"                  18.46 MB  110882 rows
. . imported "SHAIK72"."AD_WF_EVENTAUDIT"                18.30 MB  110882 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-01403: no data found
ORA-01403: no data found
Failing sql is :
DECLARE I_N VARCHAR2(60);   I_O VARCHAR2(60);   c DBMS_METADATA.T_VAR_COLL;   df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS'; BEGIN  DELETE FROM "SYS"."IMPDP_STATS";   c(1) :=   DBMS_METADATA.GET_STAT_COLNAME('SHAIK72','C_BPARTNER','NULL ',NULL,0);  DBMS_METADATA.GET_STAT_INDNAME('SHAIK72','C_BPARTNER',c,1,i_o,i_n);   INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39082: Object type VIEW:"SHAIK72"."DSI_V_RETAILER_STRUCT" created with compilation warnings
ORA-39082: Object type VIEW:"SHAIK72"."DSI_V_SHIPMENT_DETAILS" created with compilation warnings
ORA-39082: Object type VIEW:"SHAIK72"."M_STORAGE_V" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 10:59:21


Here are the workarounds to solve this issue:

1.) Use the traditional export/import (exp/imp) utilities.

2.) First import the dump excluding the indexes, and then import the same dump again including only the indexes, as below:

a.)  impdp system/xxxx DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=SHAIK72:SHAIK72 exclude=indexes

b.)  impdp system/xxxx DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp1.log remap_schema=SHAIK72:SHAIK72 include=indexes

3.) On googling, I found that using the parameter EXCLUDE=STATISTICS may also avoid this issue. Check the link below for details on this parameter.
http://dbhk.wordpress.com/category/impdp/
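If the statistics are skipped with EXCLUDE=STATISTICS, they can simply be regathered once the import completes. A minimal sketch (the schema name SHAIK72 is the one from the log above):

```sql
-- Import without the problematic index statistics
-- impdp system/xxxx@orcl DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=SHAIK72:SHAIK72 EXCLUDE=STATISTICS

-- Then regather fresh statistics for the imported schema from SQL*Plus
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SHAIK72', cascade => TRUE);
```

Regathering also has the advantage that the statistics reflect the data as it exists in the target database, rather than the source.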


Enjoy     :-) 

Wednesday, December 7, 2011

ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],[1002300]

We had upgraded our database to fix some issues, and everything went well. On the very next day, while dropping a user, we got an ORA-00600 internal error with the following arguments:

SQL> drop user test cascade ;
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],[1002300], [], [], [], [], [], [], [], [], [], []

ORA-600 is an internal error and can occur for various reasons. This issue has been identified as Bug 10373381. In my case it was due to an upgrade from a lower version to a higher version. I checked MetaLink and found an interesting workaround. To solve this error, perform the steps below:

1.) Apply the 11.2.0.2.2 PSU Patch 11724916 or higher:

If on Exadata, download and apply the 11.2.0.2 Bundle Patch 6, Patch 12326685, or higher (see Note 1314319.1 for the latest 11.2.0.2 Exadata patches).
If on Windows, apply the 11.2.0.2 Patch 3 or higher (see Note 1114533.1 for the latest 11.2.0.2 patch):
32-bit: Patch 11731183
64-bit (x64): Patch 11731184

2.)  If available for our platform and version, download and apply Patch 10373381

3.)  As a workaround, restore and perform a direct upgrade from 10.2.0.5 to the 11.2.0.2 release.



Enjoy    :-) 


Saturday, December 3, 2011

Transparent Data Encryption in Oracle 11g

Oracle Transparent Data Encryption (TDE) enables organizations to encrypt sensitive application data on storage media completely transparently to the application. TDE addresses encryption requirements associated with public and private privacy and security regulations such as PCI DSS. TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of table columns containing sensitive information. TDE tablespace encryption and support for hardware security modules (HSM) were introduced in Oracle Database 11gR1.

TDE protects data at rest. It encrypts the data in the datafiles so that, if they are obtained by other parties, it will not be possible to access the clear-text data. TDE cannot be used to hide data from users who have privileges to access the tables: in a database where TDE is configured, any user who has access to an encrypted table will see the data in clear text, because Oracle transparently decrypts it for any user with the necessary privileges.

TDE uses a two-tier encryption key architecture consisting of:

  • a master encryption key - the key used to encrypt the secondary keys used for column encryption and tablespace encryption.
  • one or more table and/or tablespace keys - the keys used to encrypt one or more specific columns, or to encrypt tablespaces. There is only one table key per table, regardless of the number of encrypted columns, and it is stored in the data dictionary. The tablespace key is stored in the header of each datafile of the encrypted tablespace.

The table and tablespace keys are encrypted using the master key. The master key is stored in an external security module (ESM) that can be one of the following:

  • an Oracle Wallet - a secure container outside of the database. It is encrypted with a password. 
  • a Hardware Security Module (HSM) - a device used to secure keys and perform cryptographic operations. 

To start using  TDE the following operations have to be performed:

1.) Make sure that the wallet location exists. If a non-default wallet location must be used, specify it in the sqlnet.ora file:

ENCRYPTION_WALLET_LOCATION =
   (SOURCE = (METHOD = FILE)
     (METHOD_DATA =
      (DIRECTORY = C:\app\neerajs\admin\orcl\wallet)
     )
   )

Note: The default encryption wallet location is $ORACLE_BASE/admin/<global_db_name>/wallet. If we want Oracle to manage a wallet in the default location, there is no need to set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora.

It is important to check that the location specified in sqlnet.ora (or the default location) exists and can be read and written by the Oracle processes.

2.) Generate a master key :

SQL> alter system set encryption key identified by "wallet_password" ;
system altered

This command will do the following :

A.) If there is no wallet currently in the wallet location, a new wallet with the password "wallet_password" is generated. The password is enclosed in double quotes to preserve the case of the characters; without the double quotes, the password is converted to all upper case. This command also causes the new wallet to be opened and made ready for use.

B.) A new master key is generated and written to the wallet. This newly generated master key becomes the active master key. The old master keys (if there were any) are still kept in the wallet, but they are no longer active; they are retained so that data previously encrypted with them can still be decrypted.

To see the status of a wallet, run the following query:

SQL> select * from v$encryption_wallet;
WRL_TYPE             WRL_PARAMETER                      STATUS
-----------    ------------------------------         -----------
file                C:\app\neerajs\admin\orcl\wallet         OPEN
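Note that after an instance restart the wallet is closed again, and the encrypted data is inaccessible until it is reopened. A sketch of the commands (the password is the one chosen above; in 11gR2 the CLOSE command also requires the password, while in 11gR1 it does not):

```sql
-- After a database restart, open the wallet to make encrypted data accessible
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";

-- The wallet can also be closed explicitly (11gR2 syntax)
SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE IDENTIFIED BY "wallet_password";
```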

3.)  Enable encryption for a column or for an entire tablespace:

3.1) Create a table by specifying the encrypt option:

SQL> create table test(col1 number, col2 varchar2(100) encrypt using 'AES256' NO SALT) ;

3.2) Encrypt the column(s) of an existing table :

SQL> alter  table  test  modify( col2 encrypt SALT ) ;

Note: If the table has many rows, this operation might take some time, since all the values stored in col2 must be replaced by encrypted strings. If access to the table is needed during this operation, use Online Table Redefinition (DBMS_REDEFINITION).
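The columns that are already encrypted can be checked from the data dictionary:

```sql
-- List encrypted columns, the algorithm used, and whether SALT is applied
SQL> SELECT owner, table_name, column_name, encryption_alg, salt
     FROM dba_encrypted_columns;
```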

3.3) Create an encrypted tablespace: The syntax is the same as for creating a normal tablespace, except for two clauses:

  • The encryption algorithm, specified as "encryption using 'algorithm'". The following algorithms can be used when creating an encrypted tablespace:
AES128
AES192
AES256
3DES168
If we do not specify an algorithm with the encryption clause, AES128 is used by default.

  • The DEFAULT STORAGE (ENCRYPT) clause.
SQL> create tablespace encryptedtbs  datafile 'C:\app\neerajs\oradata\orcl\encryptedtbs01.dbf'  size 100M encryption using  'AES256'  default storage(encrypt) ;

Note: An existing non-encrypted tablespace cannot be encrypted. If we must encrypt the data of an entire tablespace, we create a new encrypted tablespace and then move the data from the old tablespace to the new one.

TDE Master Key and Wallet Management
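For example, a table can be moved into the new encrypted tablespace with ALTER TABLE ... MOVE. The table and tablespace names below are the ones used earlier in this post; the index name is hypothetical:

```sql
-- Move an existing table into the encrypted tablespace
SQL> ALTER TABLE test MOVE TABLESPACE encryptedtbs;

-- Indexes on the moved table become UNUSABLE and must be rebuilt
SQL> ALTER INDEX test_idx REBUILD;
```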

The wallet is a critical component and should be backed up in a secure location (different from the location where the database backups are stored!). If the wallet containing the master keys is lost, or if its password is forgotten, the encrypted data will no longer be accessible. Make sure that the wallet is backed up in the following scenarios:

1. Immediately after creating it
2. When regenerating the master key
3. When backing up the database (make sure that the wallet backup is not stored in the same location as the database backup)
4. Before changing the wallet password

Make sure that the wallet password is complex but at the same time easy to remember. Where possible, split knowledge of the wallet password. If needed, the wallet password can be changed within Oracle Wallet Manager or with the following orapki command (starting from 11.1.0.7):

c:\> orapki wallet change_pwd -wallet <wallet_location>

Oracle recommends placing the wallet files outside of the $ORACLE_BASE directory to avoid having them backed up to the same location as other Oracle files. Furthermore, it is recommended to restrict access to the directory and to the wallet files to avoid accidental removal.

We can identify the encrypted tablespaces in the database using the query below:
SQL>SELECT ts.name, es.encryptedts, es.encryptionalg FROM v$tablespace ts
INNER JOIN v$encrypted_tablespaces es  ON es.ts# = ts.ts# ; 

The following are supported with encrypted tablespaces:
  • Moving a table back and forth between an encrypted tablespace and a non-encrypted tablespace.
  • Data Pump export/import of encrypted content/tablespaces.
  • Transportable tablespaces using Data Pump.
The following are not supported with encrypted tablespaces:
  • Tablespace encryption cannot be used for the SYSTEM, SYSAUX, UNDO and TEMP tablespaces.
  • An existing tablespace cannot be encrypted.
  • Traditional export/import utilities with encrypted content.
For a worked example of TDE, click here.

Enjoy     :-) 


Thursday, December 1, 2011

Enable block change tracking in oracle 11g


The block change tracking (BCT) feature improves incremental backup performance by recording the changed blocks of each datafile in a block change tracking file. This file is a small binary file stored in the database area. RMAN tracks changed blocks as redo is generated. If we enable block change tracking, RMAN uses the BCT file to identify the changed blocks for an incremental backup, avoiding the need to scan every block in the datafile. RMAN uses block change tracking only when the incremental level is greater than 0, because a level 0 incremental backup includes all blocks.

Enable block change tracking (BCT) 

SQL> alter database enable block change tracking  using file 'C:\app\neerajs\admin\noida\bct.dbf' ;

When data blocks change, shadow processes track the changed blocks in a private area of memory at the same time as they generate redo. When a commit is issued, the BCT information is copied to a shared area in the large pool called the 'CTWR dba buffer'. At the checkpoint, a new background process, the Change Tracking Writer (CTWR), writes the information from the buffer to the change tracking file. If contention for space in the CTWR dba buffer occurs, a wait event called 'Block Change Tracking Buffer Space' is recorded. Causes of this wait event include poor I/O performance on the disk where the change tracking file resides, or a CTWR dba buffer that is too small to record the number of concurrent block changes. By default, block change tracking (and hence the CTWR process) is disabled, because it can introduce some minimal performance overhead on the database.
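If that wait event shows up, one thing worth checking is how much large pool memory is allocated to the CTWR buffer, which can be seen in v$sgastat:

```sql
-- Check the memory allocated to the CTWR buffer in the large pool
SQL> SELECT pool, name, bytes
     FROM v$sgastat
     WHERE name LIKE 'CTWR%';
```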

The v$block_change_tracking view contains the name and size of the block change tracking file, plus the status of change tracking. We can check it with the command below:

SQL> select filename, status, bytes from v$block_change_tracking;

To check whether the block change tracking file is being used, run the command below:

SQL> select  file#,  avg(datafile_blocks), avg(blocks_read),  avg(blocks_read/datafile_blocks) * 100 as  "% read for backup"  from v$backup_datafile  where incremental_level > 0  and  used_change_tracking = 'YES'  group by file#   order by file# ;

To disable Block Change Tracking (BCT)   issue the below command
SQL> alter database disable block change tracking  ;



Enjoy    :-) 


Wednesday, November 30, 2011

ORA-39082 ".... created with compilation warnings" while Importing



ORA-39082 generally occurs during import. The error message states that the object in the SQL statement following this error was created with compilation errors. If this error occurred for a view, it is possible that a base table of the view was missing. Here is a scenario of the ORA-39082 error:

C:\>impdp  system/xxxx@xe  remap_schema=shaik9sep:shaik9sep11g dumpfile=SHAIK9SEP10G.DMP logfile=shaik9sep10g_import.log 

Import: Release 10.2.0.1.0 - Production on Wednesday, 30 November, 2011 14:40:06
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_04":  system/********@xe  remap_schema=shaik9sep:shaik9sep11g dumpfile=SHAIK9SEP10G.DMP logfile=shaik9sep10g_import.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SHAIK9SEP11G" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SHAIK9SEP11G"."AD_ARCHIVE"                       188.3 MB    1785 rows
. . imported "SHAIK9SEP11G"."T_REPORTSTATEMENT"          54.13 MB    952472 rows
. . imported "SHAIK9SEP11G"."FACT_ACCT"                        23.79 MB   115203 rows
. . imported "SHAIK9SEP11G"."AD_QUERYLOG"                     20.35 MB   78411 rows
. . imported "SHAIK9SEP11G"."T_TRIALBALANCE"                 12.54 MB   55310 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39082: Object type ALTER_FUNCTION:"SHAIK9SEP11G"."DSI_FUNC_PRODUCTREP" created with compilation warnings
ORA-39082: Object type ALTER_FUNCTION:"SHAIK9SEP11G"."INVOICEOPEN" created with compilation warnings
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
ORA-39082: Object type ALTER_PROCEDURE:"SHAIK9SEP11G"."T_INVENTYVALUE_CREATE" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"SHAIK9SEP11G"."AD_SYNCHRONIZE" created with compilation warnings
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type VIEW:"SHAIK9SEP11G"."M_STORAGE_V" created with compilation warning

All the above ORA-39082 errors are warnings. This error occurs due to the order in which objects are imported, or due to dependencies on other objects. For example, in the above case Data Pump import creates procedures before views; if a procedure depends on a view, we get ORA-39082 compilation warnings. There are various ways to solve this issue.

1.) Run utlrp.sql to recompile all invalid objects within the database after the import is complete. This script is in the $ORACLE_HOME/rdbms/admin directory; alternatively, we can use the built-in UTL_RECOMP package. This will usually clean up all the invalid objects. utlrp.sql compiles objects across all schemas in the database. Note that when remapping objects from one schema to another, some objects may still reference the old schema, and utlrp.sql will not be able to compile them.
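For example, after the import completes (the schema name is the one from the log above):

```sql
-- Recompile all invalid objects in the database
SQL> @?/rdbms/admin/utlrp.sql

-- Verify that nothing is left invalid in the imported schema
SQL> SELECT object_name, object_type, status
     FROM dba_objects
     WHERE owner = 'SHAIK9SEP11G' AND status = 'INVALID';
```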

2.) After the import is completed, recompile each failing object individually. This is useful when there are only a few errors. The commands below can be used to recompile the objects:

SQL> ALTER PACKAGE <SchemaName>.<PackageName> COMPILE;
SQL> ALTER PACKAGE <SchemaName>.<PackageName> COMPILE BODY;
SQL> ALTER PROCEDURE my_procedure COMPILE;
SQL> ALTER FUNCTION my_function COMPILE;
SQL> ALTER TRIGGER my_trigger COMPILE;
SQL> ALTER VIEW my_view COMPILE;
SQL> EXEC DBMS_UTILITY.compile_schema(schema => 'SHAIK9SEP11G') ;  or
SQL> EXEC UTL_RECOMP.recomp_serial('SHAIK9SEP11G') ;

In the case of synonyms, we need to recreate the synonym.

3.) Use the SQLFILE option of impdp to generate the DDL from the export dump, replace the schema name globally, edit the script, and execute it from SQL*Plus. This should resolve most of the errors. If errors remain, proceed with utlrp.sql. This is one of the better options for dealing with this type of error.
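A sketch of this approach (the dump file name is the one used above; ddl.sql is a hypothetical output file name):

```sql
-- Generate the DDL into a script file without importing anything
-- impdp system/xxxx@xe dumpfile=SHAIK9SEP10G.DMP sqlfile=ddl.sql

-- Edit ddl.sql (e.g. replace the schema name globally), then run it
SQL> @ddl.sql
```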

4.) There are bugs that return a similar error during import (impdp). Check MetaLink note: ORA-39082 When Importing Wrapped Procedures [ID 460267.1].

Check the below link for more info and examples about this errors  : 
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:286816015990



Enjoy   :-)