Friday, April 15, 2011

How To Estimate Disk Space Needed for an Export Job?

If we just want to know how much disk space the dump will take, without actually exporting any data, we can use the ESTIMATE_ONLY=y parameter on the expdp command. With ESTIMATE_ONLY=y, Data Pump estimates the space, in bytes, needed per table. The demo below shows the SYSTEM user checking the disk space estimates for an export of the SCOTT and HR schemas.

C:\>expdp system/ramtech@noida directory=dpump  schemas=scott,hr estimate_only=y
Export: Release 11.1.0.6.0 - Production on Friday, 15 April, 2011 18:56:03
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=scott,hr logfile=scott_hr.log estimate_only=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
 estimated "HR"."COUNTRIES"                               64 KB
 estimated "HR"."DEPARTMENTS"                             64 KB
.  estimated "HR"."EMPLOYEES"                               64 KB
 estimated "HR"."JOBS"                                    64 KB
 estimated "HR"."JOB_HISTORY"                             64 KB
 estimated "HR"."LOCATIONS"                               64 KB
 estimated "HR"."REGIONS"                                 64 KB
 estimated "SCOTT"."DEPT"                                 64 KB
 estimated "SCOTT"."EMP"                                  64 KB
 estimated "SCOTT"."SALGRADE"                             64 KB
 estimated "SCOTT"."BONUS"                                 0 KB
Total estimation using BLOCKS method: 640 KB
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:57:38

From the output we can see that we need about 640 KB of disk space to export the SCOTT and HR schemas.

Note : 

  • The ESTIMATE_ONLY parameter cannot be used in conjunction with the QUERY parameter.
  • The estimate is not exact; it gives only the approximate size of the dump file.
  • If the Data Pump export job involves compressed tables, the default size estimation given for the compressed tables is inaccurate when ESTIMATE=BLOCKS is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS (see the example after this list).
  • The estimate may also be inaccurate if the QUERY, SAMPLE, or REMAP_DATA parameter is used.
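
For example, to request a statistics-based estimate for the same schemas, we can add ESTIMATE=STATISTICS to the command from the demo above (a sketch; the table statistics must be reasonably current for the result to be meaningful):

C:\>expdp system/ramtech@noida directory=dpump schemas=scott,hr estimate_only=y estimate=statistics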


Enjoy    :-) 



Data Pump Encryption in Oracle

Encryption parameters were first introduced in Oracle 10g Data Pump, where the only such parameter is ENCRYPTION_PASSWORD. Oracle 11g introduced three more encryption parameters in Data Pump: ENCRYPTION, ENCRYPTION_ALGORITHM, and ENCRYPTION_MODE.

To enable encryption, either the ENCRYPTION or ENCRYPTION_PASSWORD parameter, or both, must be specified. If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION parameter defaults to ALL. If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to NONE. In short, Data Pump encryption is controlled by what gets encrypted (ENCRYPTION), which algorithm is used (ENCRYPTION_ALGORITHM), and how the encryption keys are managed (ENCRYPTION_MODE).
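
For reference, the values these parameters accept in 11g are:

ENCRYPTION           = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
ENCRYPTION_ALGORITHM = {AES128 | AES192 | AES256}
ENCRYPTION_MODE      = {DUAL | PASSWORD | TRANSPARENT}

TRANSPARENT and DUAL modes require an open Oracle wallet; PASSWORD mode, used in the demo below, needs only the password supplied on the command line.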

Here is a demo which includes all of the encryption parameters.

C:\>expdp system/ramtech@noida directory=dpump  dumpfile=hr_schema.dmp logfile=exp_hrlog.log  schemas=hr encryption=all  encryption_password=india encryption_algorithm=AES256 ENCRYPTION_MODE=password
Export: Release 11.1.0.6.0 - Production on Friday, 15 April, 2011 17:18:32
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump dumpfile=hr_schema.dmp logfile=exp_hrlog.log schemas=hr encryption=all encryption_password=******** encryption_algorithm=AES256 ENCRYPTION_MODE=password
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                            6.382 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.023 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.81 KB     107 rows
. . exported "HR"."JOBS"                                 6.992 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.062 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.281 KB      23 rows
. . exported "HR"."REGIONS"                              5.492 KB       4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\HR_SCHEMA.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:21:29

Now we will import the above dump file without specifying the password and check the error.

C:\>impdp system/ramtech@noida directory=dpump dumpfile=hr_schema.dmp logfile=imp_hr.log remap_schema=hr:harry
Import: Release 11.1.0.6.0 - Production on Friday, 15 April, 2011 17:30:17
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39174: Encryption password must be supplied.

Now we supply the ENCRYPTION_PASSWORD parameter to complete the above task successfully.

C:\>impdp system/ramtech@noida directory=dpump dumpfile=hr_schema.dmp logfile=imp_hr1.log remap_schema=hr:harry encryption_password=india

Import: Release 11.1.0.6.0 - Production on Friday, 15 April, 2011 17:36:33
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile= hr_schema.dmp logfile=imp_hr1.log remap_schema=hr:harry encryption_password=********
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"HARRY" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HARRY"."COUNTRIES"                         6.382 KB      25 rows
. . imported "HARRY"."DEPARTMENTS"                       7.023 KB      27 rows
. . imported "HARRY"."EMPLOYEES"                         16.81 KB     107 rows
. . imported "HARRY"."JOBS"                              6.992 KB      19 rows
. . imported "HARRY"."JOB_HISTORY"                       7.062 KB      10 rows
. . imported "HARRY"."LOCATIONS"                         8.281 KB      23 rows
. . imported "HARRY"."REGIONS"                           5.492 KB       4 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 17:37:12



Enjoy    :-)



Thursday, April 14, 2011

Tracking Oracle database growth

Keeping track of database space growth in production environments is a must-do job for every DBA. Maintaining and reporting historical database growth information is a task that involves a good amount of planning and the scheduling of various jobs in the database.

Oracle provides time-series segment statistics for tables and indexes in the 10g Automatic Workload Repository (AWR) tables, from which data for Oracle growth reports can be extracted quickly. In Oracle 10g, total database growth reporting is easy because the dba_hist_seg_stat table provides the delta values for every table and index, as the sketch below shows. Starting from 10g, Oracle has provided us with a feature to achieve this without much overhead; most importantly, we can use it without incurring any extra license cost. This technique is used by many DBAs, but not much documentation is available on the internet for easy use.
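
For example, the per-segment deltas recorded in dba_hist_seg_stat can be summed directly to list the fastest-growing segments. A minimal sketch (assuming the standard AWR views and, as in the script below, a one-week window):

SQL> select o.owner, o.object_name, o.object_type,
            round(sum(s.space_used_delta)/1024/1024, 2) growth_mb
     from   dba_hist_seg_stat s, dba_hist_snapshot sp, dba_objects o
     where  s.snap_id = sp.snap_id
     and    s.obj# = o.object_id
     and    sp.begin_interval_time > sysdate - 7
     group  by o.owner, o.object_name, o.object_type
     order  by growth_mb desc;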


The following script is helpful in tracking tablespace-level growth:


SQL> select b.tsname tablespace_name,
            max(b.used_size_mb) cur_used_size_mb,
            round(avg(inc_used_size_mb), 2) avg_increas_mb
     from ( select a.days, a.tsname, used_size_mb,
                   used_size_mb - lag(used_size_mb, 1)
                       over (partition by a.tsname order by a.tsname, a.days) inc_used_size_mb
            from ( select to_char(sp.begin_interval_time, 'MM-DD-YYYY') days,
                          ts.tsname,
                          max(round((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) used_size_mb
                   from   dba_hist_tbspc_space_usage tsu,
                          dba_hist_tablespace_stat   ts,
                          dba_hist_snapshot          sp,
                          dba_tablespaces            dt
                   where  tsu.tablespace_id = ts.ts#
                   and    tsu.snap_id = sp.snap_id
                   and    ts.tsname = dt.tablespace_name
                   and    sp.begin_interval_time > sysdate - 7
                   group by to_char(sp.begin_interval_time, 'MM-DD-YYYY'), ts.tsname
                   order by ts.tsname, days ) a
          ) b
     group by b.tsname
     order by b.tsname;
Output :

TABLESPACE_NAME        CUR_USED_SIZE_MB   AVG_INCREAS_MB
--------------------   ----------------   --------------
ENOM_CM_DATA                    1121.31            78.71
ENOM_CM_INDEX                      2.38                0
ENOM_EL_CATALOG                    4.13              .01
ENOM_EL_DATA                      13.63                0
ENOM_EL_INDEX                     16.06              .01
ENOM_GS_CATALOG                       1                0
ENOM_GS_DATA                          1                0
ENOM_LA_DATA                          5                0
ENOM_LA_INDEX                         3                0
EXAMPLE                           78.44                0
SYSAUX                           614.75            14.46
SYSTEM                           692.88             1.32
UNDOTBS1                             48             -.96
USERS                            644.38            91.47

14 rows selected.


Enjoy     :-)


Extract SQL Statements during Import (Data Pump)

The SQLFILE parameter of Data Pump specifies a file into which all of the SQL DDL that Import would have executed, based on the other parameters, is written. The SQL is not actually executed, and the target system remains unchanged. The SQLFILE parameter can only be used with Data Pump import, not with Data Pump export. It helps in investigating a schema's objects and structure. Any existing file that has a name matching the one specified with this parameter is overwritten. For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output; they should not be executed directly.

syntax :    SQLFILE=[directory_object:]file_name

First of all we take an export of the schema, and then we import the dump file into another schema using the REMAP_SCHEMA parameter and extract the SQL statements. Let's have a look at the SQLFILE parameter.

SQL> create directory DATAPUMP as 'd:\dpump\';
Directory created.
SQL> grant read,write on directory datapump to system;
Grant succeeded.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>expdp system/ramtech@noida directory=datapump dumpfile=scott_sql.dmp logfile=sql_log.log schemas=scott
Export: Release 11.1.0.6.0 - Production on Thursday, 14 April, 2011 17:16:49
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=datapump dumpfile=scott_sql.dmp logfile=sql_log.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 384 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."AAAA"                              5.039 KB       5 rows
. . exported "SCOTT"."DEPT"                              5.937 KB       4 rows
. . exported "SCOTT"."EMP"                               8.570 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.867 KB       5 rows
. . exported "SCOTT"."TEST"                              5.023 KB       3 rows
. . exported "SCOTT"."TEST1"                             5.031 KB       3 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\SCOTT_SQL.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:20:41

Now we will run an import of the above dump file against another schema (HARRY), using the SQLFILE parameter to extract the DDL instead of executing it.

C:\>impdp system/ramtech@noida directory=datapump dumpfile=scott_sql.dmp logfile=impsql_log.log sqlfile=scott_sqlfile.sql   remap_schema=scott:harry
Import: Release 11.1.0.6.0 - Production on Thursday, 14 April, 2011 17:26:25
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_FULL_01":  system/********@noida directory=datapump dumpfile=scott_sql.dmp logfile=impsql_log.log sqlfile=scott_sqlfile.sql remap_schema=scott:harry
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully completed at 17:27:17

Note:  Passwords are not included in the SQL file.           


 Enjoy      :-)


Wednesday, April 13, 2011

Delete or Cleanup Orphaned Data Pump Jobs

Sometimes we need to stop a Data Pump operation for some reason and continue it later (as described in a previous post). When such jobs are left behind, we clean up the orphaned Data Pump jobs by performing the following steps.

1.)  Check the orphaned datapump jobs.

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;

OWNER_NAME  JOB_NAME              OPERATION  JOB_MODE  STATE        ATTACHED_SESSIONS
----------  --------------------  ---------  --------  -----------  -----------------
SYSTEM      SYS_EXPORT_SCHEMA_01  EXPORT     SCHEMA    NOT RUNNING                  0
SYSTEM      SYS_EXPORT_SCHEMA_03  EXPORT     SCHEMA    NOT RUNNING                  0
SYSTEM      SYS_EXPORT_SCHEMA_02  EXPORT     SCHEMA    NOT RUNNING                  0

2.)  Check the status of the "state" column
For orphaned jobs the state will be NOT RUNNING, so from the output we can say that all three are orphaned jobs. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs: if a new Data Pump job is started, a new entry is created which has no relation to the old Data Pump jobs.

3.)  Drop the master tables
Since the above jobs are stopped and won't be restarted anymore, drop their master tables. The master tables here are SYS_EXPORT_SCHEMA_01, SYS_EXPORT_SCHEMA_02, and SYS_EXPORT_SCHEMA_03.
SQL> drop table  system.SYS_EXPORT_SCHEMA_03 ; 
Table dropped.
SQL> drop table  system.SYS_EXPORT_SCHEMA_01 ; 
Table dropped.
SQL> drop table  system.SYS_EXPORT_SCHEMA_02 ; 
Table dropped.


4.)  Check for existing Data Pump jobs
Now re-check the existing Data Pump jobs with the query from step 1. If the dropped tables are in the recyclebin, purge them from the recyclebin.
SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
no rows selected
SQL> purge table system.SYS_EXPORT_SCHEMA_01;
Table purged.
SQL> purge table system.SYS_EXPORT_SCHEMA_02;
Table purged.
SQL> purge table system.SYS_EXPORT_SCHEMA_03;
Table purged.

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
 no rows selected
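
As an alternative to dropping and purging the master tables by hand (steps 3 and 4), we can attach to a stopped job and issue KILL_JOB in interactive mode, which drops the master table for us. A sketch, using the first job name from step 1:

C:\>expdp system/ramtech@noida attach=SYS_EXPORT_SCHEMA_01
Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes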

Note : 

1.)  Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view dba_datapump_jobs is based on gv$datapump_job, obj$, com$, and user$. It shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (orphaned jobs). If a new Data Pump job is started, a new entry will be created, which has no relation to the old Data Pump jobs.

2.)  When a new Data Pump job is started with a system-generated name, the names of the existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.

3.)  A Data Pump job is not the same as a job that is defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes, whereas Data Pump jobs use a master process and worker process(es). If a Data Pump job is temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (status: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB), as sketched after these notes.

4.)  The possibility of corruption when the master table of an active Data Pump job is deleted depends on the Data Pump job. If the job is an export job, corruption is unlikely, as dropping the master table only causes the Data Pump master and worker processes to abort. This situation is similar to aborting an export with the original export client.
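
A sketch of re-attaching to such a stopped job and resuming it (assuming the job name from the listing in step 1):

C:\>expdp system/ramtech@noida attach=SYS_EXPORT_SCHEMA_01
Export> start_job
Export> continue_client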



Enjoy     :-)