Saturday, December 31, 2011

A Very Very Happy New Year To U All



Receive my simple gift of LOVE
Wrapped with SINCERITY
Tied with CARE &
Sealed with BLESSINGS
2 Keep u HAPPY & SAFE all the life long.




Have a lucky and wonderful 2012
A Happy New Year! Grant that I
May bring no tear to any eye
When this New Year in time shall end
Let it be said I’ve played the friend
Have lived and loved and labored here
And made of it a happy year.






















My wishes for you, great start for Jan,
Love for Feb, peace for March,
No worries for April, fun for May,
Joy for June to Nov, happiness for Dec.



Have Fun and Enjoy  
:-)


Thursday, December 29, 2011

How to Exclude a Tablespace from an RMAN Backup?

Sometimes we may want to omit a specified tablespace from the regular backup schedule, for example when the data in the tablespace never changes, when the tablespace contains test data only, or when we are cloning a database from an RMAN backup and do not need all of the schemas. In such cases we can either change our backup strategy or skip certain tablespaces in the database.

To handle this, we can configure the EXCLUDE option so that the specified tablespace is left out of the Backup Database command. We can also skip a tablespace directly at backup time, but only when the tablespace is offline or read-only. The exclusion condition applies to any data files that we add to this tablespace in the future. Below are the steps to configure the EXCLUDE option.

C:\> rman  target  /
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Dec 28 18:58:05 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: NOIDA (DBID=1523131116)


RMAN> configure exclude for tablespace "EXAMPLE" ;
Tablespace EXAMPLE will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored

To check this parameter, use the command below:

RMAN> show exclude ;
RMAN configuration parameters for database with db_unique_name NOIDA are:
CONFIGURE EXCLUDE FOR TABLESPACE 'EXAMPLE';

Even when the EXCLUDE option is configured, we can still back up the excluded tablespace by explicitly specifying it in a Backup command or by using the NOEXCLUDE option on a "Backup Database" command, as:

RMAN> backup database noexclude ;
or
RMAN> backup tablespace example ;
Starting backup at 29-DEC-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=C:\APP\NEERAJS\ORADATA\NOIDA\EXAMPLE01.DBF
channel ORA_DISK_1: starting piece 1 at 29-DEC-11
channel ORA_DISK_1: finished piece 1 at 29-DEC-11
piece handle=E:\RMAN_BACKUP\0UMVEC3G_1_1 tag=TAG20111229T150832 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 29-DEC-11

We can disable the exclusion for tablespace EXAMPLE as follows:

RMAN> configure exclude for tablespace example clear ;
Tablespace EXAMPLE will be included in future whole database backups
old RMAN configuration parameters are successfully deleted


RMAN> show exclude ;
RMAN configuration parameters for database with db_unique_name NOIDA are :
RMAN configuration has no stored or default parameters


To skip read-only and offline tablespaces, we can issue the backup database command as:
RMAN>backup database skip readonly, skip offline ;


Enjoy       :-)

Wednesday, December 28, 2011

Configure RMAN Backupset Compression


RMAN compresses the backup set contents before writing them to disk. No extra uncompression steps are required during recovery when we use RMAN compression. RMAN has two types of compression:  

1.) Null Compression     and
2.) Unused Block Compression


1.) Null Compression :  When backing up datafiles into backup sets, RMAN does not back up the contents of data blocks that have never been allocated. In other words, RMAN never backs up blocks that have never been used. For example: we have a tablespace with one datafile of size 100 MB, and out of that 100 MB only 50 MB has ever been used. RMAN will back up only 50 MB.

2.) Unused Block Compression :  RMAN skips the blocks that do not currently contain data; this is called unused block compression. RMAN creates more compact backups of datafiles by skipping datafile blocks that are not currently used to store data. No extra action is required on the part of the DBA to use this feature.

Example :  We have a tablespace with one datafile of size 100 MB, of which 50 MB is used by user tables. A user then drops a table of 25 MB belonging to that tablespace. With unused block compression, only 25 MB of the file is backed up. With null compression alone, 50 MB would have been backed up, because null compression considers all blocks that have ever been formatted/used.


Binary Compression : Binary compression is enabled by specifying the "AS COMPRESSED" clause in the backup command. RMAN applies a binary compression algorithm as it writes data to backup sets. This compression is similar to the compression provided by many tape vendors when backing up data to tape, but we cannot guarantee an exact compression percentage. The algorithm can greatly reduce the space required for disk backup storage; the ratio is typically 2x to 4x, and greater for text-intensive databases. The command to take a compressed backup is:

RMAN> backup as compressed backupset database ;


There is no special command to restore a database from compressed backupsets; the restore command is the same as with uncompressed backups. A restore from a compressed backupset will, however, take more time than one from uncompressed backupsets.
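As a quick illustration (assuming the target database is mounted and the compressed backupsets are available in the configured backup location), a restore uses the usual commands:

RMAN> restore database ;
RMAN> recover database ;
RMAN> alter database open ;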
To configure RMAN compression persistently, we can run the following RMAN commands:


RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
followed by ..
RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH' ;
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' ;
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW' ;
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC' ;


Oracle 11g added several compression algorithms to compress data. They can be used for compressing tables, LOBs, compressed Data Pump exports or even RMAN backups. Unfortunately, for some compression algorithms we need to purchase the "Advanced Compression Option". The following table lists the available RMAN compression options, the compression algorithm most likely being used, and whether an additional license is required:

Compression option    Algorithm    Advanced Compression Option required
------------------    ---------    ------------------------------------
BASIC                 BZIP2        No
LOW                   LZO          Yes
MEDIUM                ZLIB         Yes
HIGH                  -            Yes


The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved. If we have enabled the Oracle Database 11g Release 2 Advanced Compression Option, then we can choose from the following compression levels:

  • HIGH - Best suited for backups over slower networks where the limiting factor is network speed
  • MEDIUM - Recommended for most environments. Good combination of compression ratios and speed
  • LOW - Least impact on backup throughput and suited for environments where CPU resources are the limiting factor.

Note:  The compression ratio generally increases from LOW to HIGH, with a trade-off of potentially consuming more CPU resources.
We can check the available compression levels by using the command below:


SQL> select  *  from V$RMAN_COMPRESSION_ALGORITHM;
I found a good scenario on the net with statistics about these compression levels. Here is the scenario: the environment used was a freshly created 11g Release 2 database with some smaller tables in it. The total sum of all segments equals 4.88 GB. All database data files, excluding the temporary ones, are 7.3 GB in total. Excluding temporary and undo data files, the total size equates to 5.9 GB.


Here is a summary of the test results for each compression level:

HIGH compression puts an incredibly high load on the machine and takes extremely long, but produces the smallest backup set size. Surprisingly, BASIC compression (which is available without the Advanced Compression license) does a good job as well and produces the second smallest backup set, though it takes nearly as long as an uncompressed backup. In other environments with faster CPUs this will change.

In the test environment used, either LOW or MEDIUM compression seems to be the best choice. Because MEDIUM produces an approximately 15% smaller backup set while taking only a few seconds longer to complete, I would rank MEDIUM first and LOW second.

Finally, we come to the conclusion that the stronger the compression, the smaller the backup size but the more CPU-intensive the backup. If we do not have the Advanced Compression license, BASIC compression will produce reasonable compression rates at moderate load. If we have the licence, we have a lot more options to suit our needs.

If we want to test and optimize our RMAN backups, we basically have three major switches to play with:

  • compression algorithm
  • RMAN parallelism, and
  • data transfer mechanism (SAN or Ethernet [this includes: iSCSI, NFS, CIFS, Backup to tape over Ethernet])


Enjoy     :-)


Oracle Advanced Compression

Oracle Advanced Compression and Oracle Database 11g Release 2 help manage more data in a cost-effective manner. With data volumes, on average, tripling every two years, Oracle Advanced Compression delivers compression rates of 2-4x across all types of data and applications. Storage savings from compression cascade throughout the data center, reducing network traffic and data backups as well. And by reading fewer blocks off disk, Oracle Advanced Compression also improves query performance.

Oracle Advanced Compression is a separately licensed option of the Oracle 11g database that allows data in the database to be compressed. Oracle Advanced Compression offers the following advantages:


1.) OLTP Compression  : It allows structured and unstructured data to be compressed on insert, update and delete operations (a minimal example follows this list). Its features include:
  • New compression algorithm uses a deferred, or batched, approach
  • Data is inserted as-is, without compression, until the PCTFREE value is reached
  • Compression of data starts once the PCTFREE threshold is reached
  • Can be enabled at table, partition or tablespace level
  • No need to decompress the data during reads
  • Recommended for tables with low update activity
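As a minimal sketch of OLTP compression (the table name and columns below are made up purely for illustration), it can be enabled when creating a table or on an existing table in 11g Release 2:

SQL> create table sales_hist ( sale_id number, sale_date date, amount number ) compress for oltp ;
SQL> alter table sales_hist compress for oltp ;

Note that ALTER TABLE ... COMPRESS FOR OLTP only affects blocks used by future DML; existing rows can be compressed by rebuilding the table, e.g. with ALTER TABLE ... MOVE.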

2.) Data Pump Compression   : In Data Pump, compression of metadata was introduced in 10g and compression of "data" was introduced in 11g (a hedged example follows this list). This covers the following features:
  • Both are inline operations
  • Saves on storage allocation
  • No need to uncompress before import
  • Implemented with the COMPRESSION parameter; supported values are ALL, DATA_ONLY and METADATA_ONLY
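A hedged example of a compressed export (the schema and file names are illustrative; COMPRESSION=ALL requires the Advanced Compression Option):

C:\> expdp system/xxxx schemas=scott dumpfile=scott_comp.dmp logfile=scott_comp.log compression=ALL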

3.) Data Guard Compression : It includes the following features (a sketch follows this list):
  • Redo is compressed as it is transmitted over the network
  • Helps efficiently utilize network bandwidth when Data Guard is set up across data centers
  • Faster re-synchronization of Data Guard during gap resolution
  • Recommended for low network bandwidth
  • Implemented with the COMPRESSION attribute of the initialization parameter log_archive_dest_n
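A sketch of how this might look (the standby service name "stby" and the other redo transport attributes are assumptions, not taken from this post):

SQL> alter system set log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby COMPRESSION=ENABLE' scope=both ;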

4.) RMAN Backup Compression : It compresses RMAN backups. The following features are included:
  • Supports compression of backups using the ZLIB algorithm
  • Faster compression and lower CPU utilization compared to the default BZIP2 (10g)
  • Lower compression ratio compared to BZIP2
  • Implemented with the CONFIGURE COMPRESSION ALGORITHM 'value' command, where value can be HIGH, MEDIUM (ZLIB) or LOW (LZO)


The Oracle Database 11g Advanced Compression option introduces a comprehensive set of compression capabilities to help customers maximize resource utilization and reduce costs. It allows IT administrators to significantly reduce their overall database storage footprint by enabling compression for all types of data – be it relational (table), unstructured (file), or backup data . Although storage cost savings are often seen as the most tangible benefit of compression, innovative technologies included in the Advanced Compression Option are designed to reduce resource requirements and technology costs for all components of our IT infrastructure, including memory and network bandwidth .

The benefits of compression are manifold:
1.) Reduction of disk space used for storage .
2.) Reduction in I/O bandwidth requirements .
3.) Faster full table scans .
4.) Lower server memory usage.


Enjoy    :-)


Tuesday, December 27, 2011

Change Database Character Set using CSSCAN


CSSCAN (Database Character Set Scanner) is a scanning tool that allows us to see the impact of a database character set change, or assists us in correcting an incorrect database NLS_CHARACTERSET setup. Data scanning identifies the amount of effort required to migrate data into the new character encoding scheme before changing the database character set. This information helps to determine the best approach for converting the database character set.

Before altering the character set of a database, check the convertibility of the existing data, since character set conversions can cause data loss or data corruption. The Character Set Scanner utility provides the following two features:

1.) Convertibility check of existing data and potential issues. The Scanner checks all character data in the database including the data dictionary and tests for the effects and problems of changing the character set encoding (characterset). At the end of the scan, it generates a summary and exception report of the database scan.

2.) Csscan also allows us to check whether any data in the database is incorrectly stored.

The CSALTER script is part of the Database Character Set Scanner utility. The CSALTER script is the most straightforward way to migrate a character set, but it can be used only if all of the schema data is a strict subset of the new character set: each and every character in the current character set must be available in the new character set and have the same code point value in the new character set.

With the strict superset criteria in mind, only the metadata is converted to the new character set by the CSALTER script, with the following exception: the CSALTER script performs data conversion only on CLOB columns in the data dictionary and sample schemas that have been created by Oracle. CLOB columns that users have created may need to be handled separately.

Note: it is possible to run Csscan from a client, but the client needs to be the same base version as the database home (i.e., an Oracle 11g server needs an Oracle 11g client).
To change the database character set, perform the following steps :


STEP 1 : Remove the invalid objects and purge the recyclebin, then take a full backup of the database.

STEP 2 : Install the CSSCAN utility if it is not already installed; we will get error CSS-00107 if it is missing. Install it by running the csminst.sql script, which is found in $ORACLE_HOME\rdbms\admin.

STEP 3 : Set the ORACLE_SID and run the Database Character Set Scanner utility as:
CSSCAN  /AS SYSDBA  FULL=Y


STEP 4 : Run the CSALTER script. This script is in the $ORACLE_HOME\rdbms\admin folder.
i.>   shut down 
ii.>  startup restrict 
iii.> @csalter.plb
iv.> shut immediate
v.>  startup  

Note :  
i.> The CSALTER script does not perform any user data conversion. It only changes the character set metadata in the data dictionary. Thus, after the CSALTER operation, Oracle will behave as if the database had been created using the new character set.

ii.> Changing the database character set is not an easy task. It is quite tricky and we may face errors that need Oracle Support, so it is better to raise an SR for this task and involve Oracle Support.

There are a number of notes on Metalink about this; check the notes below:
Note: 555823.1 – Changing US7ASCII or WE8ISO8859P1 to WE8MSWIN1252
Note: 444701.1 – Csscan output explained
Note: 745809.1 – Installing and configuring Csscan in 10g and 11g (Database Character Set Scanner)
Note: 225912.1 – Changing the Database Character Set ( NLS_CHARACTERSET ) [ID 225912.1]




Enjoy   J J J


Thursday, December 22, 2011

Append Date & Time to File Name in a Windows Batch File


Sometimes it may be required to schedule the same task twice a day. Without unique file names we would have to create two batch files and schedule them separately, because the generated file name would otherwise be overwritten or throw errors. To overcome this issue, we can schedule the same task with a single batch file. Suppose we have to take a logical backup of a schema (say SCOTT) twice a day; we can do this by simply appending the date and time to the dump file name.

On Linux/Unix systems, shell scripting is very flexible with variables and we can define them ourselves, e.g.:
expdate=`date '+%d%m%Y'`
dat=`date '+%m%d%y %H:%M:%S'`

and then use them in our script as:

./expdp system/xxxx  dumpfile=scott_$expdate.dmp logfile=scott_log_$expdate.log schemas=scott

But on a Windows machine, we use the %date% and %time% substring expansions. They are as below:

Date:  %date:~4,2%-%date:~7,2%-%date:~12,2%
Time:  %time:~0,2%-%time:~3,2%-%time:~6,2%  

The above expressions can be used to generate a unique dump file name. Below is a demo:

c:\> exp system/ramtech@noida  owner=scott  file=c:\scott_%date:~4,2%-%date:~7,2%-%date:~12,2%-%time:~0,2%-%time:~3,2%-%time:~6,2%.dmp log=c:\scottlog_%date:~4,2%-%date:~7,2%-%date:~12,2%-%time:~3,2%-%time:~6,2%.log    

The generated dump file name is 'scott_12-22-11-12-04-31.dmp' and the log file name is 'scottlog_12-22-11-04-31.log', where the date is 12-22-11 (22nd December 2011) and the time is 12-04-31 (12 hr : 04 min : 31 sec).
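Putting it together, a small batch file sketch that can be scheduled twice a day (it reuses the connection string and paths from the example above; the file name daily_scott_exp.bat is only an illustration):

@echo off
rem daily_scott_exp.bat : build a unique timestamp from the current date and time
set stamp=%date:~4,2%-%date:~7,2%-%date:~12,2%-%time:~0,2%-%time:~3,2%-%time:~6,2%
rem append the timestamp to the dump and log file names so two runs never clash
exp system/ramtech@noida owner=scott file=c:\scott_%stamp%.dmp log=c:\scottlog_%stamp%.log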


Enjoy    J J J


Tuesday, December 20, 2011

What is CLOUD computing ??


I read an article on cloud computing and decided to post it. The article was written by the well-known software engineer Gagan Kundra.

Cloud computing is a marketing term for technologies that provide computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. A parallel to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the component devices or infrastructure required to provide the service.

Whenever Apple CEO Steve Jobs came on stage, the world paid attention: which little device would he show this time that would change people's lives? After all, Apple is the company that gave us the iPod, the Macintosh and the iPhone. Jobs did something like this again, the only difference being that this time he did not have a device at all. He simply talked about a service that lets people reach all their data and information from different devices. iCloud is the name of that Apple service. But it is not really about Apple; it shows the power of cloud computing. Whatever the service is called, and whether it comes from Apple, Google or Microsoft, cloud computing is going to take off in the coming days. Tech pundits say that it will change our lives, and companies like Infosys have predicted that in the coming days almost all the work in the world will be done on clouds. Just remember that there is no water in these clouds; they are filled with digital data, information and other associated material. And these clouds are not in the sky but on giant computers, called servers.

There is a lot of talk about cloud computing nowadays, but it is not a new thing. If you are on the Internet, you are, consciously or unconsciously, already using it. Take an Internet service as simple as e-mail. Most people do not download their e-mail to their computer; they leave it on the Internet and read it there. Who wants to fill their computer's hard drive with e-mail? But have you ever wondered where the e-mail you leave on the Internet or the web is kept, so that whenever you open your e-mail account you can see it? The answer is simple: in the cloud. You probably do not realise it, but nearly every time you leave something on the web, the data lives in these clouds, whether it is a picture on Facebook, a video on YouTube, or a new article on your blog. If you have left it on the web, you are using cloud computing. You will also have noticed that anything you leave on the Internet can be seen from any computer, cell phone or tablet-like device; after all, you can get to your e-mail or Facebook from most office and home computers or from your phone. This is the greatest power and speciality of cloud computing: it gives you the ability to view and change the content or data you keep on the Internet from anywhere, at any time.

A few years ago, using any software meant downloading it to your computer or cell phone, and the software could then only be run on that computer or phone. Today, via cloud computing, you can use the same software without downloading anything, from virtually any device: phone, tablet, computer or even television. Think of the cloud as having taken the place of your computer. Google Docs, for example, runs entirely on the Internet and gives you almost the same features you get with Microsoft Office, with nothing to download. Your files and documents stay on it, and you can look at them from any computer or cell phone. And Google Docs is just one example; dozens of companies are trying to put the power and benefits of cloud computing to use. Amazon's cloud service gives its customers a place to keep their favourite songs.

Microsoft has put many products on the cloud, and some of its famous Office software is on the cloud as well. And now Apple has also stepped into cloud computing: the iCloud service offers people the facility to look at and use their software and information across many devices. Interestingly, many cloud computing services are free; using Google Docs or iCloud does not lighten your pockets at all. As an increasing number of programs run on the web, our work will run on the digital cloud. It will depend less on particular devices and more on our Internet connection: wherever we have the Internet, we will be able to work, whether from a tablet, a computer, a cell phone, a TV, a game console, or even a watch connected to the Internet. Our work and entertainment will be wherever the Internet is; they will no longer be tied to a single device.


Enjoy    :-)


Friday, December 16, 2011

Estimate Tablespace Growth In Oracle


I have already posted a script for tracking the growth of the database. Sometimes it may be required to find the growth of each tablespace. I googled and found a script for tracking the growth of each tablespace date-wise; thanks to Hasan Shaharear for sharing such a good script. This script does not show the growth of the SYSTEM and SYSAUX tablespaces. Below is the script.

SQL> set heading on
SQL> set linesize 5500
SQL> SELECT TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY') days
  2  , ts.tsname
  3  , max(round((tsu.tablespace_size* dt.block_size )/(1024*1024),2) ) cur_size_MB
  4  , max(round((tsu.tablespace_usedsize* dt.block_size )/(1024*1024),2)) usedsize_MB
  5  FROM DBA_HIST_TBSPC_SPACE_USAGE tsu
  6  , DBA_HIST_TABLESPACE_STAT ts
  7  , DBA_HIST_SNAPSHOT sp
  8  , DBA_TABLESPACES dt
  9  WHERE tsu.tablespace_id= ts.ts#
 10  AND tsu.snap_id = sp.snap_id
 11  AND ts.tsname = dt.tablespace_name
 12  AND ts.tsname NOT IN ('SYSAUX','SYSTEM')
 13  GROUP BY TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY'), ts.tsname
 14  ORDER BY ts.tsname, days;

DAYS                 TSNAME        CUR_SIZE_MB       USEDSIZE_MB
-------------      ---------       -------------     ---------------
07-12-2011         SDE                         2448              1813.94
08-12-2011         SDE                         2448              1813.94
09-12-2011         SDE                         2448              1813.94
10-12-2011         SDE                         2448              1813.94
11-12-2011         SDE                         2448              1815.94
12-12-2011         SDE                         2448              1815.94
13-12-2011         SDE                         2448              1816
14-12-2011         SDE                         2448              1816
15-12-2011         SDE                         2448              1816
16-12-2011         SDE                         2448              1816
07-12-2011         UNDOTBS1                4950              196.56
08-12-2011         UNDOTBS1                4950              302.56
09-12-2011         UNDOTBS1                4950              427.63
10-12-2011         UNDOTBS1                4950              348.56
11-12-2011         UNDOTBS1                6210              5317
12-12-2011         UNDOTBS1                6210              532.5
13-12-2011         UNDOTBS1                6210              388.56
14-12-2011         UNDOTBS1                6210              422.5
15-12-2011         UNDOTBS1                6210              585.63
16-12-2011         UNDOTBS1                6210              479.5
07-12-2011         USERS                      20480           19941.88
08-12-2011         USERS                      20480           20001.5
09-12-2011         USERS                      20480           20341.56
10-12-2011         USERS                      20480           20467.25
11-12-2011         USERS                      20480           20336.69
12-12-2011         USERS                      20480           20317.13
13-12-2011         USERS                      20480           20267.31
14-12-2011         USERS                      20480           20346.13
15-12-2011         USERS                      20480           20340.06
16-12-2011         USERS                      20480           20330.81
30 rows selected.


Enjoy    :-)




Tuesday, December 13, 2011

How to Reduce DB File Sequential Read Wait


The db file sequential read wait event occurs when we are trying to access data using an index and Oracle is waiting for the read of an index block from disk into the buffer cache to complete. A sequential read is a single-block read, and single-block I/Os are usually the result of using indexes. Rarely, full table scan calls can get truncated to a single-block call due to extent boundaries, or because buffers are already present in the buffer cache. Db file sequential read wait events may also (rarely) appear when undo blocks are read from disk in order to provide a consistent get.

The actual object being waited on can be determined from the P1, P2 and P3 information in v$session_wait, where P1, P2 and P3 give the absolute file number, the block being read, and the number of blocks (P3 should normally be 1) respectively. A sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3). This wait may also be seen for reads from datafile headers (P2 indicates a file header read).
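For example, the waiting sessions and the segment being read can be identified with a couple of standard dictionary queries (a generic sketch, not tied to any particular database):

SQL> select sid, event, p1 file#, p2 block#, p3 blocks
     from   v$session_wait
     where  event = 'db file sequential read' ;

SQL> select owner, segment_name, segment_type
     from   dba_extents
     where  file_id = &file_id
     and    &block_id between block_id and block_id + blocks - 1 ;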

Block reads are fairly inevitable, so the aim should be to minimise unnecessary I/O. This is best achieved by good application design and efficient execution plans; changes to execution plans can yield orders-of-magnitude changes in performance. Hence, to reduce this wait event, follow the points below.

1.) Tune Oracle - Tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time (see the query sketch after this list).
2.) Tune Physical Devices - Distribute (stripe) the data across different disks to reduce the I/O. Logical distribution is useless; "physical" I/O performance is only governed by the independence of the devices.
3.) Faster Disks - Buy faster disks to reduce the time spent on each I/O request.
4.) Increase db_block_buffers - A larger buffer cache can (not will, "might") help.
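For point 1, a simple starting point is to find the statements doing the most physical reads; a hedged sketch (the top-10 cut-off and column list are arbitrary):

SQL> select *
     from  ( select sql_id, disk_reads, executions, substr(sql_text,1,60) sql_text
             from   v$sql
             order  by disk_reads desc )
     where rownum <= 10 ;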


Reference : from the AskTom site


Enjoy     :-) 


Cannot Load OCI.DLL : While Connecting


Sometimes the error "cannot load OCI.DLL" occurs when we try to connect to the Oracle database using third-party tools (e.g., Toad, SQLTools and others) or the command prompt. This error may occur for the following reasons:

1.) The oci.dll error may occur because the correct ORACLE_HOME and PATH have not been set in the environment variables.
2.) The oci.dll file may be corrupt or may not exist at the correct path.
3.) The oci.dll may not be the correct version (e.g. 32-bit software will load a 32-bit DLL; we cannot, for example, use a 64-bit DLL with a 32-bit executable).

To solve this issue, consider the points below (a quick check from the command prompt is sketched after the list).

1.) Check the ORACLE_HOME and PATH settings in the environment variables.
2.) Check the location of the oci.dll file; its correct path is $ORACLE_HOME\bin\oci.dll.
3.) Check that oci.dll is the correct version.
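A quick way to verify points 1 and 2 from a Windows command prompt (the values printed depend entirely on the machine being checked):

C:\> echo %ORACLE_HOME%
C:\> echo %PATH%
C:\> dir %ORACLE_HOME%\bin\oci.dll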

In my case, I was facing this issue because ORACLE_HOME was not correctly set in the environment variables. After setting the correct path in the environment variables, we no longer got any error while connecting to the database.



Enjoy     :-) 


Friday, December 9, 2011

Dbms_Metadata Package : To Extract all DDL


There are various methods available to extract DDL from an Oracle database. We can extract the DDL using the export/import utilities, the DBMS_METADATA package, or various third-party tools. If we do not have any tools, the best way to extract DDL is the DBMS_METADATA package. Here we will discuss the various DBMS_METADATA methods for extracting DDL from Oracle.

The DBMS_METADATA package provides a way to retrieve metadata from the database dictionary as XML or DDL. We can retrieve either a particular object type (such as a table, index, or procedure) or a heterogeneous collection of object types that form a logical unit (such as a database export or schema export).

A very useful set of functions, the GET_xxxx functions, are used to extract the DDL. These functions fetch the metadata for objects with a single call; they encapsulate calls to OPEN, SET_FILTER, and so on. The function we use depends on the characteristics of the object type and on whether we want XML or DDL.

1.) GET_DDL( )   :  This function is used to fetch named objects, especially schema objects (tables, views). It can also be used with nameless objects, such as resource_cost.

Syntax   :
DBMS_METADATA.GET_DDL (
object_type     IN VARCHAR2,
name            IN VARCHAR2,
schema          IN VARCHAR2 DEFAULT NULL,
version         IN VARCHAR2 DEFAULT 'COMPATIBLE',
model           IN VARCHAR2 DEFAULT 'ORACLE',
transform       IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

Example  : Here we will extract the DDL for the EMP table.

SQL> select dbms_metadata.get_ddl('TABLE', 'EMP')   from dual ;

DBMS_METADATA.GET_DDL('TABLE','EMP')
--------------------------------------------------------------------------------
   CREATE TABLE "ALEX"."EMP"
   (    "EMPNO" NUMBER(4,0),
        "ENAME" VARCHAR2(10),
        "JOB" VARCHAR2(9),
        "MGR" NUMBER(4,0),
        "HIREDATE" DATE,
        "SAL" NUMBER(7,2),
        "COMM" NUMBER(7,2),
        "DEPTNO" NUMBER(2,0),
         CONSTRAINT "EMP_DEPT_FK" FOREIGN KEY ("DEPTNO")
          REFERENCES "ALEX"."DEPT" ("DEPTNO") ENABLE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "USERS"

Similarly, we can also extract the DDL of other objects such as tablespaces, views and sequences.
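For instance (the object names below come from the default sample schemas and are only illustrations):

SQL> select dbms_metadata.get_ddl('TABLESPACE','USERS') from dual ;
SQL> select dbms_metadata.get_ddl('VIEW','EMP_DETAILS_VIEW','HR') from dual ;
SQL> select dbms_metadata.get_ddl('SEQUENCE','DEPARTMENTS_SEQ','HR') from dual ;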

If we want only the basic DDL, without the storage clause and constraints, then we need to modify some transformation parameters as follows:

SQL> begin
  2     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'STORAGE',false);
  3     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'TABLESPACE',false);
  4     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES', false);
  5     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'REF_CONSTRAINTS', false);
  6     dbms_metadata.set_transform_param (dbms_metadata.session_transform,'CONSTRAINTS', false);
  7  end;
  8  /
 PL/SQL procedure successfully completed.

Now, running the same statement as above gives the following result:

SQL> select dbms_metadata.get_ddl ('TABLE', 'EMP')  from dual ;
 DBMS_METADATA.GET_DDL('TABLE','EMP')
--------------------------------------------------------------------------------
   CREATE TABLE "ALEX"."EMP"
   (    "EMPNO" NUMBER(4,0),
        "ENAME" VARCHAR2(10),
        "JOB" VARCHAR2(9),
        "MGR" NUMBER(4,0),
        "HIREDATE" DATE,
        "SAL" NUMBER(7,2),
        "COMM" NUMBER(7,2),
        "DEPTNO" NUMBER(2,0)
   )

One of the nice things is that we don't need to modify all the transformation parameters again to go back to the default. They made it really easy to return to the default settings:

SQL> begin
  2     dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'DEFAULT');
  3  end;
  4  /
 PL/SQL procedure successfully completed.

2.) GET_DEPENDENT_DDL( ) : This function is used to fetch dependent objects (audits, object grants).
 Syntax :
DBMS_METADATA.GET_DEPENDENT_DDL (
object_type         IN VARCHAR2,
base_object_name    IN VARCHAR2,
base_object_schema  IN VARCHAR2 DEFAULT NULL,
version             IN VARCHAR2 DEFAULT 'COMPATIBLE',
model               IN VARCHAR2 DEFAULT 'ORACLE',
transform           IN VARCHAR2 DEFAULT 'DDL',
object_count        IN NUMBER   DEFAULT 10000)
RETURN CLOB ;

Example :   In this example, we will extract the referential constraints that depend on other tables.

SQL > select dbms_metadata.get_dependent_ddl( 'REF_CONSTRAINT', table_name) DDL
              FROM USER_TABLES WHERE table_name = 'EMPLOYEES' ;
ALTER  TABLE  "HR"."EMPLOYEES" ADD CONSTRAINT  "EMP_DEPT_FK"  FOREIGN KEY ("DEPARTMENT_ID")  REFERENCES  "HR"."DEPARTMENTS"  ("DEPARTMENT_ID") ENABLE

ALTER  TABLE "HR"."EMPLOYEES"  ADD  CONSTRAINT "EMP_JOB_FK"  FOREIGN KEY ("JOB_ID") REFERENCES  "HR"."JOBS"  ("JOB_ID") ENABLE

ALTER TABLE "HR"."EMPLOYEES" ADD CONSTRAINT "EMP_MANAGER_FK" FOREIGN KEY ("MANAGER_ID")  REFERENCES "HR"."EMPLOYEES" ("EMPLOYEE_ID") ENABLE

3.) GET_GRANTED_DDL( ) : This function is used to fetch grants (system grants, role grants) made to the users of the database.

Syntax : 


DBMS_METADATA.GET_GRANTED_DDL (
object_type     IN VARCHAR2,
grantee         IN VARCHAR2 DEFAULT NULL,
version         IN VARCHAR2 DEFAULT 'COMPATIBLE',
model           IN VARCHAR2 DEFAULT 'ORACLE',
transform       IN VARCHAR2 DEFAULT 'DDL',
object_count    IN NUMBER   DEFAULT 10000)
RETURN CLOB;

Example :  Here we will extract the system grants assigned to the user SYSTEM.

SQL> select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','SYSTEM') from dual;

DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','SYSTEM')
--------------------------------------------------------------------------------
  GRANT GLOBAL QUERY REWRITE TO "SYSTEM"
  GRANT CREATE MATERIALIZED VIEW TO "SYSTEM"
  GRANT SELECT ANY TABLE TO "SYSTEM"
  GRANT CREATE TABLE TO "SYSTEM"
  GRANT UNLIMITED TABLESPACE TO "SYSTEM" WITH ADMIN OPTION

To extract all the grants for all users, we can use the statement below:

SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', USERNAME) || '/' DDL FROM DBA_USERS where exists (select 'x' from dba_role_privs drp where drp.grantee = dba_users.username)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', USERNAME) || '/' DDL FROM DBA_USERS where exists (select 'x' from dba_sys_privs dsp where dsp.grantee = dba_users.username)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', USERNAME) || '/' DDL FROM  DBA_USERS where exists (select 'x' from dba_tab_privs dtp where  dtp.grantee = dba_users.username);


Enjoy   :-) 

ORA-39083: Object type INDEX_STATISTICS failed, ORA-01403: no data found

Once, while importing, the above error occurred. I generated the SQL file of this import operation and, after analysing and googling, found that this error may occur because an index is missing for some reason, which is why the impdp utility fails while importing the statistics associated with that particular missing index. In this case, the problem arises because the expdp utility puts the CREATE INDEX statements into the dumpfile in the wrong order. Below are the details.

C:\>impdp system/xxxx@orcl  DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema= SHAIK72:SHAIK72
Import: Release 11.2.0.1.0 - Production on Fri Dec 9 10:31:59 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SHAIK72"."AD_ARCHIVE"                      188.3 MB    1785 rows
. . imported "SHAIK72"."AD_ATTACHMENT"                   81.80 MB     240 rows
. . imported "SHAIK72"."T_REPORTSTATEMENT"               60.03 MB 1012428 rows
. . imported "SHAIK72"."AD_QUERYLOG"                     25.75 MB   98396 rows
. . imported "SHAIK72"."FACT_ACCT"                       25.82 MB  123994 rows
. . imported "SHAIK72"."T_TRIALBALANCE"                  13.90 MB   59977 rows
. . imported "SHAIK72"."AD_WF_ACTIVITY"                  18.46 MB  110882 rows
. . imported "SHAIK72"."AD_WF_EVENTAUDIT"                18.30 MB  110882 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-01403: no data found
ORA-01403: no data found
Failing sql is :
DECLARE I_N VARCHAR2(60);   I_O VARCHAR2(60);   c DBMS_METADATA.T_VAR_COLL;   df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS'; BEGIN  DELETE FROM "SYS"."IMPDP_STATS";   c(1) :=   DBMS_METADATA.GET_STAT_COLNAME('SHAIK72','C_BPARTNER','NULL ',NULL,0);  DBMS_METADATA.GET_STAT_INDNAME('SHAIK72','C_BPARTNER',c,1,i_o,i_n);   INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39082: Object type VIEW:"SHAIK72"."DSI_V_RETAILER_STRUCT" created with compilation warnings
ORA-39082: Object type VIEW:"SHAIK72"."DSI_V_SHIPMENT_DETAILS" created with compilation warnings
ORA-39082: Object type VIEW:"SHAIK72"."M_STORAGE_V" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 10:59:21


Here are the workarounds to solve this issue:

1.) Use the traditional export/import utilities.

2.) First import the dump excluding the indexes and then import the same dump again including only the indexes, as below:
a.)  impdp system/xxxx DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema= SHAIK72:SHAIK72  exclude=indexes and then

b.)  impdp system/xxxx  DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp1.log remap_schema= SHAIK72:SHAIK72  include=indexes 

3.)  On googling, I found that using the parameter EXCLUDE=STATISTICS may also solve this issue. Check the link below for this parameter, and see the sketch after it.
http://dbhk.wordpress.com/category/impdp/
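A hedged sketch of such an import, reusing the file names from this post:

C:\> impdp system/xxxx@orcl DUMPFILE=SHAIK72_01.dmp LOGFILE=SHAIK72_imp.log remap_schema=SHAIK72:SHAIK72 EXCLUDE=STATISTICS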


Enjoy     :-) 

Wednesday, December 7, 2011

ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],[1002300]

Once, we upgraded our database to fix some issues and everything went well. The very next day, when dropping a user, we got an ORA-00600 internal error with the following arguments:

SQL> drop user test cascade ;
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],[1002300], [], [], [], [], [], [], [], [], [], []

ORA-600 is an internal error and can occur for various reasons. This issue has been identified as Bug 10373381; in my case it arose from an upgrade from a lower version to a higher one. I checked Metalink and found a workaround. To solve this error, perform the steps below:

1.)  Apply the 11.2.0.2.2 PSU Patch: 11724916 or higher :  

If on Exadata, download and apply the 11.2.0.2 Bundle Patch 6, Patch:12326685 or higher (see NOTE:1314319.1 for the latest 11.2.0.2 Exadata patches).
If on Windows, apply the 11.2.0.2 Patch 3 or higher (See NOTE:1114533.1 for latest 11.2.0.2 patch)
32-Bit Patch:11731183
64-Bit (x64) Patch:11731184

2.)  If available for our platform and version, download and apply Patch 10373381

3.)  As a workaround, restore and perform a direct upgrade from 10.2.0.5 to the 11.2.0.2 release.



Enjoy    :-) 


Saturday, December 3, 2011

Transparent Data Encryption in Oracle 11g

Oracle Transparent Data Encryption (TDE) enables organizations to encrypt sensitive application data on storage media, completely transparently to the application. TDE addresses encryption requirements associated with public and private privacy and security regulations such as PCI DSS. TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of table columns containing sensitive information. TDE tablespace encryption and support for hardware security modules (HSMs) were introduced in Oracle Database 11gR1.

TDE protects data at rest. It encrypts the data in the datafiles so that, if they are obtained by other parties, the clear-text data cannot be accessed. TDE cannot be used to hide data from users who have privileges to access the tables: in a database where TDE is configured, any user who has access to an encrypted table will see the data in clear text, because Oracle transparently decrypts it for any user having the necessary privileges.
TDE uses a two-tier encryption key architecture consisting of:

  • a master encryption key - this is the encryption key used to encrypt secondary keys used for column encryption and tablespace encryption.
  • one or more table and/or tablespace keys - these are the keys that are used to encrypt one or more specific columns or the keys used to encrypt  tablespaces. There is only one table key regardless of the number of encrypted columns in a table and it will be stored in the data dictionary. The tablespace key is stored in the header of each datafile of the encrypted tablespace. 

The table and tablespace keys are encrypted using the master key. The master key is stored in an external security module (ESM) that can be one of the following:

  • an Oracle Wallet - a secure container outside of the database. It is encrypted with a password. 
  • a Hardware Security Module (HSM) - a device used to secure keys and perform cryptographic operations. 

To start using  TDE the following operations have to be performed:

1.) Make sure that the wallet location exists. If a non-default wallet location must be used, then specify it in the sqlnet.ora file:

ENCRYPTION_WALLET_LOCATION =
   (SOURCE = (METHOD = FILE)
     (METHOD_DATA =
      (DIRECTORY = C:\app\neerajs\admin\orcl\wallet)
     )
   )

Note : The default encryption wallet location is $ORACLE_BASE/admin/<global_db_name>/wallet. If we want to let Oracle manage a wallet in the default location then there is no need to set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora.

It is important to check that the location specified in sqlnet.ora or the default location exists and can be read/written by the Oracle processes.

2.) Generate a master key :

SQL> alter system set encryption key identified by "wallet_password" ;
system altered

This command will do the following :

A.) If there is no wallet currently in the wallet location then a new wallet with the password "wallet_password" will be generated. The password is enclosed in double quotes to preserve the case of the characters. If the double quotes are not used then the characters of the password will be all in upper case. This command will also cause the new wallet to be opened and ready for use.

B.) A new master key will be generated and will be written to the wallet. This newly generated master key will become the active master key. The old master keys (if there were any) will still be kept in the wallet but they will not be active. They are kept there to be used when decrypting data that was previously encrypted using them .

To see the status of a wallet, run the following query:

SQL> select * from v$encryption_wallet;
WRL_TYPE             WRL_PARAMETER                      STATUS
-----------    ------------------------------         -----------
file                C:\app\neerajs\admin\orcl\wallet         OPEN

3.)  Enable encryption for a column or for an entire tablespace:

3.1) Create a table by specifying the encrypt option:

SQL> create table test(col1 number, col2 varchar2(100) encrypt using 'AES256' NO SALT) ;

3.2) Encrypt the column(s) of an existing table :

SQL> alter  table  test  modify( col2 encrypt SALT ) ;

Note : If the table has many rows, this operation might take some time since all the values stored in col2 must be replaced by encrypted strings. If access to the table is needed during this operation, then use Online Table Redefinition.

3.3)  Create an encrypted tablespace : The syntax is the same as creating a normal tablespace except for two clauses:
  • We specify the encryption algorithm – in this case 'AES256'. If we do not specify one, it will default to 'AES128'. The encryption clause and the default storage clause are specified at tablespace creation time.

Define the encryption algorithm as "using 'algorithm'" along with the encryption clause. We can use the following algorithms while creating an encrypted tablespace:
AES128
AES192
AES256
3DES168
If we don't specify any algorithm with the encryption clause, AES128 is used by default.

  •  The DEFAULT STORAGE (ENCRYPT) clause.
SQL> create tablespace encryptedtbs  datafile 'C:\app\neerajs\oradata\orcl\encryptedtbs01.dbf'  size 100M encryption using  'AES256'  default storage(encrypt) ;

Note: An existing non-encrypted tablespace cannot be encrypted in place. If we must encrypt the data from an entire tablespace, we have to create a new encrypted tablespace and then move the data from the old tablespace to the new one.
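A minimal sketch of such a move (the HR objects named here are only an illustration; every table and index in the old tablespace needs the same treatment, and moving a table marks its indexes UNUSABLE, hence the rebuild):

SQL> alter table hr.employees move tablespace encryptedtbs ;
SQL> alter index hr.emp_emp_id_pk rebuild tablespace encryptedtbs ;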

TDE Master Key and Wallet Management

The wallet is a critical component and should be backed up in a secure location (different from the location where the database backups are stored!). If the wallet containing the master keys is lost, or if its password is forgotten, the encrypted data will not be accessible anymore. Make sure that the wallet is backed up in the following scenarios:

1. Immediately after creating it.
2. When regenerating the master key.
3. When backing up the database. Make sure that the wallet backup is not stored in the same location as the database backup.
4. Before changing the wallet password.

Make sure that the wallet password is complex but at the same time easy to remember. Where possible, split knowledge of the wallet password among several people. If needed, the wallet password can be changed within Oracle Wallet Manager or with the following orapki command (starting from 11.1.0.7):

c:\> orapki wallet change_pwd -wallet <wallet_location>

Oracle recommends that the wallet files are placed outside of the $ORACLE_BASE directory to avoid having them backed up to the same location as other Oracle files. Furthermore, it is recommended to restrict access to the directory and to the wallet files to avoid accidental removal.

We can identify encrypted tablespaces in the database by using the query below:
SQL>SELECT ts.name, es.encryptedts, es.encryptionalg FROM v$tablespace ts
INNER JOIN v$encrypted_tablespaces es  ON es.ts# = ts.ts# ; 
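Similarly, the columns protected by TDE column encryption can be listed from the standard dictionary view:

SQL> select owner, table_name, column_name, encryption_alg, salt
     from   dba_encrypted_columns ;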

The following are supported with encrypted tablespaces
  • Move table back and forth between encrypted tablespace and non-encrypted tablespace .
  • Datapump is supported to export/import encrypted content/tablespaces. 
  • Transportable tablespace is supported using datapump. 
The following are not supported with encrypted tablespaces
  • Tablespace encryption cannot be used for SYSTEM, SYSAUX, UNDO and TEMP tablespaces .
  • Existing tablespace cannot be encrypted . 
  • Traditional export/import utilities for encrypted content.
To see an example of TDE, click here.

Enjoy     :-)