Monday, April 18, 2011

Create Control File Manually: When and How?

The control files of a database store the status of the physical structure of the database. The control file is absolutely crucial to database operation.
A control file contains:

> Database information (RESETLOGS SCN and its time stamp)
> Archived log history
> Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline or not)
> Redo logs (current online redo log)
> Database creation date
> Database name
> Current archive log mode
> Log records (sequence numbers, SCN range in each log)
> RMAN backup and recovery records (the control file is RMAN's default repository)
> Database block corruption information
> Database ID (DBID), which is unique to each database
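Much of this is visible from the data dictionary while the database is up. As a quick check, here is a minimal sketch (V$CONTROLFILE lists the control file copies; V$CONTROLFILE_RECORD_SECTION lists the record sections described above):

SQL> SELECT name FROM v$controlfile;
SQL> SELECT type, records_total, records_used
     FROM   v$controlfile_record_section
     ORDER  BY type;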

If the control file is lost, recovery is difficult because the database cannot even be mounted. The control file must then be recreated, and we can do this manually with the CREATE CONTROLFILE statement. The statement shown later in this post creates a new control file for the NOIDA database.

When to Create New Control Files:
We need to create new control files in the following situations:

1.) All control files for the database have been permanently damaged and we do not have a control file backup.
2.) We want to change the database name. For example, we would change a database name if it conflicted with another database name in a distributed environment.
3.) The compatibility level is set to a value that is earlier than 10g, and we must make a change to an area of database configuration that relates to any of the following parameters from the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES. If compatibility is 10g or later, we do not have to create new control files when we make such a change; the control files automatically expand, if necessary, to accommodate the new configuration information.

For example, assume that when we created the database or recreated the control files, we set MAXLOGFILES to 3. Suppose that now we want to add a fourth redo log file group to the database with the ALTER DATABASE command. If compatibility is set to 10g or later, we can do so and the control files automatically expand to accommodate the new log file information. However, with compatibility set earlier than 10g, our ALTER DATABASE command would generate an error, and we would have to first create new control files.
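Before attempting such a change, we can check the limit currently recorded in the control file. A hedged sketch: for the REDO LOG record section, RECORDS_TOTAL reflects the MAXLOGFILES setting.

SQL> SELECT records_total AS maxlogfiles
     FROM   v$controlfile_record_section
     WHERE  type = 'REDO LOG';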

Command to Create a Control File Manually

C:\>sqlplus sys/ramtech@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:31:50 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE "NOIDA"
NORESETLOGS ARCHIVELOG
MAXLOGFILES 5 
MAXLOGMEMBERS 3 
MAXDATAFILES 10 
MAXINSTANCES 1 
MAXLOGHISTORY 113
LOGFILE 
GROUP 1 'D:\oracle\oradata\noida\REDO01.LOG' SIZE 50M,
GROUP 2 'D:\oracle\oradata\noida\REDO02.LOG' SIZE 50M,
GROUP 3 'D:\oracle\oradata\noida\REDO03.LOG' SIZE 50M
DATAFILE 
'D:\oracle\oradata\noida\SYSTEM01.DBF' , 
'D:\oracle\oradata\noida\USERS01.DBF' , 
'D:\oracle\oradata\noida\EXAMPLE01.DBF' , 
'D:\oracle\oradata\noida\SYSAUX01.DBF' ,
'D:\oracle\oradata\noida\TRANS.DBF' ,
'D:\oracle\oradata\noida\UNDOTBS01.DBF' ;
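Note that at the moment we actually need this statement, the control files are gone, so the log file and datafile lists above must come from our own records. While the database is still healthy, the following simple queries against standard views produce the lists we would need:

SQL> SELECT member FROM v$logfile;
SQL> SELECT name FROM v$datafile;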

Specify RESETLOGS if we want Oracle to ignore the contents of the files listed in the LOGFILE clause. The log files do not have to exist but each redo_log_file_spec in the LOGFILE clause must specify the SIZE parameter. Oracle will assign all online redo log file groups to thread 1 and will enable this thread for public use by any instance. We must then open the database using ALTER DATABASE RESETLOGS.

NORESETLOGS will use all files in the LOGFILE clause as they were when the database was last open. These files must exist and must be the current online redo log files rather than restored backups. Oracle will reassign the redo log file groups to re-enabled threads as previously assigned.
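For completeness, a minimal sketch of the usual follow-up once CREATE CONTROLFILE succeeds (RECOVER is needed only if the datafiles are not consistent):

SQL> RECOVER DATABASE;            -- only if media recovery is required
SQL> ALTER DATABASE OPEN;         -- for the NORESETLOGS case above
-- or, if the control file was created with RESETLOGS:
SQL> ALTER DATABASE OPEN RESETLOGS;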


Enjoy      :-)


How to Determine the Name of the Trace File to be Generated


In many cases we need to find out the name of the latest trace file generated in the USER_DUMP_DEST directory. Usually we go to the USER_DUMP_DEST location in the operating system's file browser, sort the files by date, and look for the latest ones. We can avoid this hassle entirely if we know the trace file name in advance. Let's have a look.

Demo : 1 
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:44:49 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter database backup controlfile to trace;
Database altered.

The above command generates a trace file inside USER_DUMP_DEST. Let's check the location of USER_DUMP_DEST. From SQL*Plus, issue:

SQL> show parameter user_dump_dest

NAME                 TYPE        VALUE
-------------------- ----------- ----------------------------------------
user_dump_dest       string      d:\oracle\diag\rdbms\noida\noida\trace

Here the newest files correspond to the latest traces, but sorting by date does not always pick out the right file. The task becomes trivial if we know, in advance, the name of the trace file the ALTER DATABASE command will generate. We can obtain it as follows:

SQL> SELECT s.sid, s.serial#,
            pa.value || '\' || LOWER(SYS_CONTEXT('userenv', 'instance_name')) ||
            '_ora_' || p.spid || '.trc' AS trace_file
     FROM   v$session s, v$process p, v$parameter pa
     WHERE  pa.name = 'user_dump_dest'
     AND    s.paddr = p.addr
     AND    s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');

       SID    SERIAL# TRACE_FILE
---------- ---------- ----------------------------------------------------------
       110        312 d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3552.trc
  
The trace file generated by this session will be named noida_ora_3552.trc. So issuing "alter database backup controlfile to trace" now will produce the file d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3552.trc.

Demo : 2 
This method is simpler and makes it easy to identify the trace file. Let's have a look at another demo.

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:49:49 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter user_dump_dest

NAME                 TYPE        VALUE
-------------------- ----------- ----------------------------------------
user_dump_dest       string      d:\oracle\diag\rdbms\noida\noida\trace

SQL> alter session set tracefile_identifier='mytracefile' ;
Session altered.

SQL> alter database backup controlfile to trace;
Database altered.


Now go to the user_dump_dest location and find the trace file whose name contains "mytracefile". In my case the name is "noida_ora_3552_mytracefile.trc".

The difference between the two demos is that the first queries system views, so the same query can be adapted to find the trace files generated by other sessions, whereas the second marks the trace file of the current session only. The other difference is that in the first demo we compute the trace file name and only then fire the command, while in the second demo we set a trace file identifier up front so that the correct trace file is easy to pick out afterwards.
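Incidentally, on 11g there is a third option worth knowing: the V$DIAG_INFO view reports the current session's default trace file directly. A minimal sketch (the value returned depends on the instance and session):

SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';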


Enjoy      :-)


Saturday, April 16, 2011

How to Determine the DBID?

The DBID is a unique identifier found in all datafile headers; it is used to identify the database a file belongs to. There are situations, such as disaster recovery after losing all database files, where we must restore the spfile or control file from autobackup, and to do that we need the DBID. If we do not have a record of the DBID of the database, there are two places from where we can easily find it.
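Of course, as long as the database can still be mounted, the DBID can simply be queried, so it is worth recording it ahead of time:

SQL> SELECT dbid, name FROM v$database;

      DBID NAME
---------- ---------
1502483083 NOIDA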

1.) The DBID is used in forming the filename of the control file autobackup. For example, the name of my control file autobackup is C-1502483083-20110416-00.

Here 1502483083 specifies the DBID, and 20110416 specifies the date, i.e. 16th April 2011.
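This naming comes from RMAN's %F substitution variable, which expands to c-DBID-YYYYMMDD-QQ. A sketch of the configuration that produces such autobackups (the disk path here is hypothetical):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'D:\backup\%F';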

2.) If we have any text files that preserve the output from an RMAN session, the DBID is displayed by the RMAN client when it starts up and connects to your database. Typical output follows:


C:\>rman target sys/xxxx@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Sat Apr 16 17:23:28 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: NOIDA (DBID=1502483083)
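Once the DBID is known, it is set in RMAN before restoring from autobackup. A minimal sketch of the disaster-recovery sequence, assuming autobackups exist in the default disk location (not a complete recovery walkthrough):

RMAN> SET DBID 1502483083;
RMAN> STARTUP NOMOUNT;        # RMAN starts a dummy instance if no spfile exists
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;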


Enjoy      :-)


FLASHBACK_SCN and FLASHBACK_TIME Parameters of Data Pump

FLASHBACK_SCN and FLASHBACK_TIME are two important features of Data Pump. If we want to run a large export while the database is in use, then ideally we should always use one of the two flashback parameters. The export operation is then performed with data that is consistent as of the specified SCN. FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

FLASHBACK_TIME : The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN. Both parameters pertain only to the Flashback Query capability of Oracle Database; they are not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive. We can get the current SCN from either of the following queries:

SQL> select current_scn from v$database ;
or
SQL> select dbms_flashback.get_system_change_number from dual ;
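For FLASHBACK_TIME we pass a timestamp instead of an SCN. Because the quoting is awkward on the Windows command line, a parameter file is the usual approach. A hypothetical sketch (the file name flash.par and the timestamp are made up for illustration):

flash.par :
directory=dpump
schemas=hr
dumpfile=flashback_hr_time.dmp
logfile=flashback_time.log
flashback_time="TO_TIMESTAMP('16-04-2011 11:30:00', 'DD-MM-YYYY HH24:MI:SS')"

C:\>expdp system/xxxx@noida parfile=flash.par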

Let's have a demo of FLASHBACK_SCN.

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1140271

SQL> create table hr.test as select * from test;
Table created.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1140487
Let's take an export using the FLASHBACK_SCN parameter:

C:\>expdp system/ramtech@noida directory=dpump schemas=hr dumpfile=flashback_hr.dmp logfile=flashlog.log       flashback_scn=1140271
Export: Release 11.1.0.6.0 - Production on Saturday, 16 April, 2011 11:35:45
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=hr dumpfile=flashback_hr.dmp logfile=flashlog.log    flashback_scn=1140271
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 512 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                            6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."JOBS"                                 6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.484 KB       4 rows
ORA-31693: Table data object "HR"."TEST" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\FLASHBACK_HR.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at 11:37:50

The above error shows that the table TEST was not included in the export because the specified SCN predates the creation of the TEST table. The export below captures everything up to the current SCN while the database is in use.

C:\>expdp system/ramtech@noida directory=dpump schemas=hr dumpfile=flashback_hr1.dmp  logfile=flashback_log.log
Export: Release 11.1.0.6.0 - Production on Saturday, 16 April, 2011 11:44:50
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=hr dumpfile=flashback_hr1.dmp logfile=flashback_log.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 512 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                                6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                             16.80 KB     107 rows
. . exported "HR"."JOBS"                                             6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                             7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                                8.273 KB      23 rows
. . exported "HR"."REGIONS"                                      5.484 KB       4 rows
. . exported "HR"."TEST"                                              5.054 KB       8 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\FLASHBACK_HR1.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:46:41


Enjoy     :-)



Friday, April 15, 2011

Estimate Parameter in Data Pump Export

The ESTIMATE parameter of Data Pump export specifies the method that Export will use to estimate how much disk space (in bytes) each table in the export job will consume, before performing the actual export operation.
The ESTIMATE parameter can take one of two values: BLOCKS (the default) or STATISTICS. The meaning of these two values is described below.

BLOCKS : The estimate is calculated by multiplying the number of database blocks used by the target objects by the appropriate block size.
STATISTICS : The estimate is calculated using statistics for each table, so to be accurate the tables must have been analyzed recently.
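If we intend to rely on ESTIMATE=STATISTICS, it is therefore worth refreshing the statistics first; a minimal sketch for the HR schema using the standard DBMS_STATS package:

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');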

Note that the figure produced by ESTIMATE=BLOCKS can be far from the size of the actual dump file. In fact, the BLOCKS method becomes particularly inaccurate when:

a) the table was created with a much bigger initial extent size than was needed for the actual table data, or
b) many rows have been deleted from the table, or only a very small percentage of each block is used.

The figure produced by ESTIMATE=STATISTICS comes closest to the dump file size, provided the tables have been analyzed recently.

Below, an example is shown for both ESTIMATE=BLOCKS and ESTIMATE=STATISTICS. In both cases the Data Pump export dump file is generated after the estimate is reported.

C:\>expdp system/ramtech@noida directory=dpump  schemas=hr logfile=hrlog11.log dumpfile=hr.dmp ESTIMATE=BLOCKS
Export: Release 11.1.0.6.0 - Production on Friday, 15 April, 2011 19:26:02
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=hr logfile=hrlog11.log ESTIMATE=BLOCKS
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "HR"."COUNTRIES"                                64 KB
.  estimated "HR"."DEPARTMENTS"                          64 KB
.  estimated "HR"."EMPLOYEES"                               64 KB
.  estimated "HR"."JOBS"                                             64 KB
.  estimated "HR"."JOB_HISTORY"                             64 KB
.  estimated "HR"."LOCATIONS"                                64 KB
.  estimated "HR"."REGIONS"                                     64 KB
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                                6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                              16.80 KB     107 rows
. . exported "HR"."JOBS"                                             6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                            7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                               8.273 KB      23 rows
. . exported "HR"."REGIONS"                                     5.484 KB       4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\EXPDAT.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 19:28:16

Now we will export the schemas by using ESTIMATE=STATISTICS

C:\>expdp system/ramtech@noida directory=dpump  schemas=hr dumpfile=hr1.dmp logfile=hrlog111.log ESTIMATE=STATISTICS
Export: Release 11.1.0.6.0 - Production on Friday, 15 April, 2011 19:31:15
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=dpump schemas=hr dumpfile=hr1.dmp logfile=hrlog111.log ESTIMATE=STATISTICS
Estimate in progress using STATISTICS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "HR"."EMPLOYEES"                           15.91 KB
.  estimated "HR"."LOCATIONS"                             8.034 KB
.  estimated "HR"."JOB_HISTORY"                          6.861 KB
.  estimated "HR"."JOBS"                                           6.795 KB
.  estimated "HR"."DEPARTMENTS"                        6.710 KB
.  estimated "HR"."COUNTRIES"                              6.150 KB
.  estimated "HR"."REGIONS"                                   5.488 KB
Total estimation using STATISTICS method: 55.95 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."JOBS"                                 6.984 KB      19 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."COUNTRIES"                            6.375 KB      25 rows
. . exported "HR"."REGIONS"                              5.484 KB       4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\HR1.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 19:33:24

Using ESTIMATE=BLOCKS, the size reported before the export was 448 KB; using ESTIMATE=STATISTICS it was 55.95 KB. My actual dump file size was 408 KB, so the BLOCKS estimate was off by 448 - 408 = 40 KB, while the STATISTICS estimate was off by 408 - 55.95 = 352.05 KB. In this case BLOCKS came far closer, presumably because the statistics on the HR tables were stale.

Note that if a table involves LOBs, the dump file size may vary as ESTIMATE does not take LOB size into consideration.
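Finally, if we only want the estimate and no dump file at all, expdp also provides the ESTIMATE_ONLY parameter, which cannot be combined with DUMPFILE. A minimal sketch:

C:\>expdp system/xxxx@noida directory=dpump schemas=hr estimate_only=y estimate=blocks logfile=est_only.log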

Enjoy      :-)