Thursday, April 14, 2011

Extract SQL Statement while Import (Data pump )

The SQLFILE parameter of Data Pump specifies a file into which Import writes all of the SQL DDL it would have executed, based on the other parameters supplied. The SQL is not actually executed, and the target system remains unchanged. SQLFILE is available only in Data Pump Import (impdp), not in Data Pump Export. It is useful for investigating a schema's objects and structure. Any existing file whose name matches the one specified with this parameter is overwritten. For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output; they should not be executed directly.

Syntax :   SQLFILE=[directory_object:]file_name

First we take an export of the SCOTT schema, and then we will run an import of the dump file against another schema using the REMAP_SCHEMA parameter, extracting the SQL statements instead of executing them. Let's have a look at the SQLFILE parameter.

SQL> create directory DATAPUMP as 'd:\dpump\';
Directory created.
SQL> grant read,write on directory datapump to system;
Grant succeeded.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>expdp system/ramtech@noida directory=datapump dumpfile=scott_sql.dmp logfile=sql_log.log schemas=scott
Export: Release 11.1.0.6.0 - Production on Thursday, 14 April, 2011 17:16:49
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=datapump dumpfile=scott_sql.dmp logfile=sql_log.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 384 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."AAAA"                              5.039 KB       5 rows
. . exported "SCOTT"."DEPT"                              5.937 KB       4 rows
. . exported "SCOTT"."EMP"                               8.570 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.867 KB       5 rows
. . exported "SCOTT"."TEST"                              5.023 KB       3 rows
. . exported "SCOTT"."TEST1"                             5.031 KB       3 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  D:\DPUMP\SCOTT_SQL.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:20:41

Now we will run an import of the above dump file against another schema (HARRY), using SQLFILE so that the DDL is written to a script instead of being executed.

C:\>impdp system/ramtech@noida directory=datapump dumpfile=scott_sql.dmp logfile=impsql_log.log sqlfile=scott_sqlfile.sql   remap_schema=scott:harry
Import: Release 11.1.0.6.0 - Production on Thursday, 14 April, 2011 17:26:25
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_FULL_01":  system/********@noida directory=datapump dumpfile=scott_sql.dmp logfile=impsql_log.log sqlfile=scott_sqlfile.sql remap_schema=scott:harry
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully completed at 17:27:17

Note:  Passwords are not included in the SQL file.           
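You can open the generated script (here d:\dpump\scott_sqlfile.sql) in any text editor to review the DDL before deciding whether to run any of it. As a quick sanity check that the remap took effect, a sketch using the Windows findstr utility (path as per this demo) would be:

C:\>findstr /i "HARRY" d:\dpump\scott_sqlfile.sql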


 Enjoy      :-)


Wednesday, April 13, 2011

Delete or Cleanup Orphaned Data Pump Jobs

Sometimes we need to stop a Data Pump operation for some reason and continue it later (as described in the previous post), but jobs that are never restarted leave orphaned entries behind. To clean up orphaned Data Pump jobs we perform the following steps.

1.)  Check the orphaned datapump jobs.

SQL>select owner_name,job_name,operation,job_mode,state,attached_sessions from dba_datapump_jobs; 
OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE   STATE         ATTACHED_SESSIONS
-----------  ---------------------  ----------  ---------  ------------  -----------------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA     NOT RUNNING                   0
SYSTEM       SYS_EXPORT_SCHEMA_03   EXPORT      SCHEMA     NOT RUNNING                   0
SYSTEM       SYS_EXPORT_SCHEMA_02   EXPORT      SCHEMA     NOT RUNNING                   0

2.)  Check the STATE column
For orphaned jobs the state will be NOT RUNNING, so from the output above we can say all three are orphaned jobs. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs: if a new Data Pump job is started, a new entry is created which has no relation to the old jobs.
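Before dropping anything it is worth cross-checking that the master tables really exist and belong to the jobs you consider orphaned. A small sketch (object names as listed in step 1):

SQL> select owner, object_name, status
     from dba_objects
     where object_type = 'TABLE'
     and object_name like 'SYS_EXPORT_SCHEMA_%';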

3.)  Drop the master tables
Since the above jobs are stopped and will not be restarted anymore, drop their master tables. The master tables here are SYS_EXPORT_SCHEMA_01, SYS_EXPORT_SCHEMA_02 and SYS_EXPORT_SCHEMA_03.
SQL> drop table  system.SYS_EXPORT_SCHEMA_03 ; 
Table dropped.
SQL> drop table  system.SYS_EXPORT_SCHEMA_01 ; 
Table dropped.
SQL> drop table  system.SYS_EXPORT_SCHEMA_02 ; 
Table dropped.


4.) Check for existing Data Pump jobs
Now re-run the query from step 1 to check the existing Data Pump jobs. If the dropped master tables are still in the recycle bin, purge them from it.
SQL> SELECT owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
no rows selected
SQL> purge table system.SYS_EXPORT_SCHEMA_01;
Table purged.
SQL> purge table system.SYS_EXPORT_SCHEMA_02;
Table purged
SQL> purge table system.SYS_EXPORT_SCHEMA_03;
Table purged

SQL>select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs; 
 no rows selected

Note : 

1.)  Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. DBA_DATAPUMP_JOBS is a view based on gv$datapump_job, obj$, com$ and user$. The view shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). If a new Data Pump job is started, a new entry is created which has no relation to the old Data Pump jobs.

2.)  When starting a new Data Pump job with a system-generated name, the names of existing Data Pump jobs in DBA_DATAPUMP_JOBS are checked in order to obtain a unique new job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.

3.) A Data Pump job is not the same as a job defined with DBMS_JOB. Jobs created with DBMS_JOB use their own processes; Data Pump jobs use a master process and worker process(es). When a Data Pump job is temporarily stopped (STOP_JOB in interactive command mode), the job still exists in the database (status: NOT RUNNING) while the master and worker processes are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).

4.) The possibility of corruption when the master table of an active Data Pump job is deleted depends on the type of Data Pump job. If the job is an export job, corruption is unlikely, as dropping the master table will only cause the Data Pump master and worker processes to abort. This situation is similar to aborting an export with the original export client.



Enjoy     :-) 


Resume and Attach Data Pump Operation

The ATTACH parameter is one of the most useful parameters in Oracle Data Pump. It comes into play when a Data Pump operation is running and we have to pause it for a while and continue it later. This is done by using the ATTACH parameter of Data Pump.

Giving a job a name allows it to be more easily identified when querying the DBA_DATAPUMP_JOBS data dictionary view. If a job name is not specified, Oracle will create a default job name for you of the form SYS_EXPORT_mode_nn, where mode denotes the type of export (FULL, TABLE, etc.) and nn is an incrementing number starting at 01. The job name is then used to attach, and reattach, to running jobs using the ATTACH parameter.

The ATTACH parameter attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the export prompt.

For a successful attach we need the uncorrupted dump file in its proper location, else we won't be able to attach. We will see how to use the ATTACH parameter; below is a demo.

C:\>expdp system/xxxx@noida  directory=datapump1   schemas=hr   dumpfile=hr_schema  logfile=ex_hrlog.log
 Export: Release 11.1.0.6.0 - Production on Wednesday, 13 April, 2011 14:21:25
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=datapump1 dumpfile=hr_schema logfile=ex_hrlog.log schemas=hr
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

(Press CTRL+C to interrupt the logging display and get the export prompt, then perform the following steps at that prompt.)

Export> stop_job
Are you sure you wish to stop this job ([yes]/no): y

Now find out the job name to attach to:

C:\>set ORACLE_SID=noida
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 13 14:27:31 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select  owner_name , job_name , operation, job_mode from dba_datapump_jobs;
OWNER_NAME     JOB_NAME                  OPERATION    JOB_MODE
-------------  ------------------------  -----------  ----------
SYSTEM         SYS_EXPORT_SCHEMA_01      EXPORT       SCHEMA

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>expdp system/ramtech@noida attach=SYS_EXPORT_SCHEMA_01
Export: Release 11.1.0.6.0 - Production on Wednesday, 13 April, 2011 14:36:26
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_SCHEMA_01
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: A370CE1FA5154C27955DF34D010CB822
  Start Time: Wednesday, 13 April, 2011 14:37:27
  Mode: SCHEMA
  Instance: noida
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/********@noida directory=datapump1 dumpfile=hr_schema logfile=ex_hrlog.log schemas=hr
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: d:\dpump\hr_schema.dmp
    bytes written: 28,672
Worker 1 Status:
  Process Name: DW01
  State: UNDEFINED

Export> CONTINUE_CLIENT

Job SYS_EXPORT_SCHEMA_01 has been reopened at Wednesday, 13 April, 2011 14:37
Restarting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=datapump1 dumpfile=hr_schema logfile=ex_hrlog.log schemas=hr
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                            6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."JOBS"                                 6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.484 KB       4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  d:\dpump\hr_schema.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:44:00


Enjoy     :-)



What is SQL Injection ?

SQL Injection :  SQL injection is a basic attack used either to gain unauthorized access to a database or to retrieve information directly from the database. SQL injection attacks are simple in nature: an attacker passes string input to an application in the hope of manipulating the SQL statement to his or her advantage.


Oracle may provide stronger and more inherent protection against SQL injection attacks than other databases. Despite these advantages, however, many web applications without proper defenses against these types of attacks are still vulnerable.

Oracle has generally fared well against SQL injection attacks, as there is no multiple-statement SQL support (unlike SQL Server and PostgreSQL), no EXECUTE statement (SQL Server), and no INTO OUTFILE function (MySQL). Also, the use of bind variables in Oracle environments for performance reasons provides strong protection against SQL injection attacks.
Generally there are four types of SQL injection attack. These are:

1.) SQL Manipulation
 
2.) Code Injection
 
3.) Function Call Injection
 
4.) Buffer Overflows

All of these types of SQL injection are valid for databases including SQL Server, DB2, MySQL, PostgreSQL and Oracle. Among these, SQL manipulation is the most important and the most commonly exploited. Let's have a look.

1.) SQL Manipulation :   The most common SQL injection attack is SQL manipulation. The attacker attempts to modify the existing SQL statement by adding elements to the WHERE clause or by extending the SQL statement with set operators like UNION, INTERSECT or MINUS.
In the example below, the first injected query simply adds "OR 'A'='A' " to the WHERE clause and we see what happens.

The second injected query appends a UNION clause:
" UNION
SELECT USERNAME FROM AA
WHERE USERNAME LIKE '%' "
Example :-

SQL> conn scott/tiger@noida
Connected.
SQL> create table aa (username varchar2(20),password varchar2(20));
 Table created.
 SQL> insert into aa values ('SCOTT','TIGER');
 1 row created.
 SQL> insert into aa values ('SCOTT1','TIGER1');
 1 row created.
 SQL> insert into aa values ('SCOTT2','TIGER2');
 1 row created.

 SQL> COMMIT;
 Commit complete.

SQL> select * from aa ;
USERNAME             PASSWORD
-------------------- --------------------
SCOTT                TIGER
SCOTT1               TIGER1
SCOTT2               TIGER2

SQL> select username from aa where username ='SCOTT' and password='TIGER' ; 

USERNAME
--------------------
SCOTT

SELECT USERNAME FROM AA  WHERE USERNAME='SCOTT' AND PASSWORD='TIGER' OR 'A'='A'
SQL> /
 USERNAME
--------------------
SCOTT
SCOTT1
SCOTT2

SELECT USERNAME FROM AA  WHERE USERNAME='SCOTT' AND PASSWORD='TIGER'
SQL> /
 USERNAME
--------------------
SCOTT

SELECT USERNAME FROM AA WHERE USERNAME='HALIM' AND PASSWORD='TEST'
 UNION SELECT USERNAME FROM AA WHERE USERNAME LIKE '%'
 USERNAME
--------------------
SCOTT2
SCOTT1
SCOTT

PL/SQL code subject to SQL injection attacks :
SQL statements can be executed in four different ways in PL/SQL:

a) embedded SQL,
b) cursors,
c) EXECUTE IMMEDIATE statements,
d) the DBMS_SQL package.

A.) Embedded SQL statements and static cursors allow only bind variables, so they are not subject to SQL injection attacks. The following, however, are subject to SQL injection attacks.

B.) Dynamic cursors (REF CURSOR). [To prevent SQL injection attacks, bind variables should always be used.]

CREATE OR REPLACE PROCEDURE pro_sample (p_empno IN VARCHAR2)
AS
  TYPE ref_cur IS REF CURSOR;
  cursor_states  ref_cur;
  rec_state      emp%ROWTYPE;
  sql_s          VARCHAR2(200);
BEGIN
  -- Unsafe: the caller's input is concatenated directly into the SQL text
  sql_s := 'SELECT * FROM emp WHERE empno = ''' || p_empno || '''';
  OPEN cursor_states FOR sql_s;
  LOOP
    FETCH cursor_states INTO rec_state;
    EXIT WHEN cursor_states%NOTFOUND;
  END LOOP;
  CLOSE cursor_states;
END;
/
This procedure is subject to SQL injection because whatever string is passed in p_empno becomes part of the SQL text.
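A safer sketch of the same procedure (the name pro_sample_safe and the local REF CURSOR type are illustrative) passes the input as a bind variable with OPEN ... FOR ... USING instead of concatenating it into the statement:

CREATE OR REPLACE PROCEDURE pro_sample_safe (p_empno IN VARCHAR2)
AS
  TYPE ref_cur IS REF CURSOR;
  cursor_states  ref_cur;
  rec_state      emp%ROWTYPE;
BEGIN
  -- The input is supplied as a bind variable, never made part of the SQL text
  OPEN cursor_states FOR 'SELECT * FROM emp WHERE empno = :1' USING p_empno;
  LOOP
    FETCH cursor_states INTO rec_state;
    EXIT WHEN cursor_states%NOTFOUND;
  END LOOP;
  CLOSE cursor_states;
END;
/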
 

C.) EXECUTE IMMEDIATE. [To prevent SQL injection attacks, bind variables should always be used.]
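A minimal illustration of the difference (the variable name p_name is only for the example):

-- Vulnerable: the caller's input becomes part of the SQL text
EXECUTE IMMEDIATE 'delete from emp where ename = ''' || p_name || '''';
-- Safe: the input is passed as a bind variable
EXECUTE IMMEDIATE 'delete from emp where ename = :1' USING p_name;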

D.) The DBMS_SQL package (for dynamic SQL statements). Here too, to prevent SQL injection and to improve application performance, bind variables should always be used.
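A small DBMS_SQL sketch using BIND_VARIABLE rather than concatenation (the table, column and values are illustrative):

DECLARE
  c  INTEGER := DBMS_SQL.OPEN_CURSOR;
  n  INTEGER;
BEGIN
  DBMS_SQL.PARSE(c, 'UPDATE emp SET comm = 0 WHERE empno = :empno', DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(c, ':empno', 7369);   -- bind, do not concatenate
  n := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/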

Monday, April 11, 2011

Question On Oracle Data Pump

Here are some question related to Data Pump which will help you to clear your doubt regarding Data Pump.

1.) What is Oracle Data Pump?
Oracle Data Pump is a new feature of Oracle Database 10g that provides high-speed, parallel, bulk data and metadata movement of Oracle database contents. A new public interface package, DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement. In Oracle Database 10g, new Export (expdp) and Import (impdp) clients that use this interface have been provided. Oracle recommends that customers use these new Data Pump Export and Import clients rather than the original Export and Import clients, since the new utilities have vastly improved performance and greatly enhanced functionality.

2.)  Is Data Pump a feature or an option of Oracle 10g?
Data Pump is a fully integrated feature of Oracle Database 10g. Data Pump is installed automatically during database creation and database upgrade.

3.) What platforms is Data Pump provided on?
Data Pump is available on the Oracle Database 10g Standard Edition, Enterprise Edition, and Personal Edition. However, the  parallel capability  is only available on  Oracle10g   Enterprise Edition. Data Pump is  included on all the same platforms  supported  by Oracle 10g, including Unix, Linux, Windows NT, Windows 2000, and Windows XP.

4.) What are the system requirements for Data Pump?
The  Data  Pump  system requirements  are  the same  as  the  standard Oracle  Database 10g requirements. Data Pump doesn’t need a lot of additional system or database resources, but the time to extract and treat the information will be dependent on the CPU and memory available on each  machine. If  system  resource  consumption becomes  an issue  while a Data Pump job is executing,  the  job  can be  dynamically  throttled to  reduce the number  of  execution  threads.

5.) What is the performance gain of Data Pump Export versus Original Export?
Using  the  Direct  Path  method  of  unloading, a single  stream  of data  unload is about 2 times faster than  original  Export  because  the Direct  Path API  has been  modified to be  even  more efficient. Depending on the level of parallelism, the level of improvement can be much more.

6.) What is the performance gain of Data Pump Import versus Original Import?
A single stream of data load is 15-45 times faster than original Import. The reason it is so much faster is that conventional Import uses only conventional-mode inserts, whereas Data Pump Import uses the Direct Path method of loading. As with Export, the job can be parallelized for even more improvement.

7.) Does Data Pump require special tuning to attain performance gains?
No, Data Pump requires no special tuning. It runs optimally “out of the box”. Original Export and (especially) Import require careful tuning to achieve optimum results.

8.) Why are directory objects needed?
They are needed to ensure data security and integrity. Otherwise, users would be able to read data that they should not have access to and perform unwarranted operations on the server.

9.) What makes Data Pump faster than original Export and Import?
There are three main reasons that Data Pump is faster than original Export and Import. First, the Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient and now supports Data Pump Import and Export. Second, because Data Pump does its processing on the server rather than in the client, much less data has to be moved between client and server. Finally, Data Pump was designed from the ground up to take advantage of modern hardware and operating system architectures in ways that original Export and Import cannot. These factors combine to produce significant performance improvements for Data Pump over original Export and Import.

10.) How much faster is Data Pump than the original Export and Import utilities?
For a single stream, Data Pump Export is approximately 2 times faster than original Export and Data Pump Import is approximately 15 to 40 times faster than original Import. Speed can be dramatically improved using the PARALLEL parameter.

11.) Why is Data Pump slower on small jobs?
Data Pump was designed for big jobs with lots of data. Each Data Pump job has a master table that has all the information about the job and is needed for restartability. The overhead of creating this master table makes small jobs take longer, but the speed in processing large amounts of data gives Data Pump a significant advantage in medium and larger jobs.

12.) Are original Export and Import going away?
Original Export is being deprecated with the Oracle Database 11g release. Original Import will always be supported so that dump files from earlier releases (release 5.0 and later) will be able to be imported. Original and Data Pump dump file formats are not compatible.

13.) Are Data Pump dump files and original Export and Import dump files compatible?
No, the dump files are not compatible or interchangeable. If you have original Export dump  files, you must use original Import to load them.

14.) How can I monitor my Data Pump jobs to see what is going on?
In interactive mode, you can get a lot of detail through the STATUS command. In SQL, you can query the following views:
  *  DBA_DATAPUMP_JOBS – all active Data Pump jobs and the state of each job
  *  USER_DATAPUMP_JOBS – summary of the user's active Data Pump jobs
  *  DBA_DATAPUMP_SESSIONS – all active user sessions that are attached to a Data Pump job
  *  V$SESSION_LONGOPS – shows the progress of each active Data Pump job
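For example, a rough progress check while an export is running (the filter on opname assumes a system-generated export job name):

SQL> select opname, target_desc, sofar, totalwork, units
     from v$session_longops
     where opname like 'SYS_EXPORT%' and sofar <> totalwork;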

 15.) Can you adjust the level of parallelism dynamically for more or less resource consumption?
Yes, you can dynamically  throttle the number of threads of execution  throughout the lifetime of the job. There is an interactive command mode where you can adjust the level of parallelism. So, for example, you can start up a job during the day with a PARALLEL=2, and then increase it at night to a higher level.
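For example, after attaching to the job (as in the ATTACH demo above), the number of workers can be changed straight from the interactive prompt; the value 4 below is just an illustration:

Export> PARALLEL=4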

16.) Can I use gzip with Data Pump?
Because Data Pump uses parallel operations to achieve its high performance, you cannot pipe the output of Data Pump Export through gzip. Starting in Oracle Database 11g, the COMPRESSION parameter can be used to compress a Data Pump dump file as it is being created. The COMPRESSION parameter is available as part of the Advanced Compression Option for Oracle Database 11g.
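A hedged example of the 11g syntax (directory and file names are illustrative):

C:\>expdp system/******@noida directory=datapump schemas=scott dumpfile=scott_comp.dmp compression=all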

17.)Does Data Pump support all data types?
Yes, all the Oracle database data types are supported via Data Pump’s two data movement mechanisms, Direct Path and External Tables.

18.) What kind of object selection capability is available with Data Pump?
With Data Pump, there is much more flexibility in selecting objects for unload and load operations. You can now unload any subset of database objects (such as functions, packages, and procedures) and reload them on the target platform. Almost all database object types can be excluded or included in an operation using the new EXCLUDE and INCLUDE parameters.
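For example (names are illustrative; complex name clauses usually go in a parameter file because of operating-system quoting rules):

C:\>expdp system/******@noida directory=datapump schemas=scott dumpfile=scott_code.dmp include=PROCEDURE include=FUNCTION
C:\>expdp system/******@noida directory=datapump schemas=scott dumpfile=scott_nostats.dmp exclude=STATISTICS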

19.) Is it necessary to use the Command line interface or is there a GUI that you can use?
You can either use the Command line interface or the Oracle Enterprise Manager web-based GUI interface.

20.) Can I move a dump file set across platforms, such as from Sun to HP?
Yes, Data Pump handles all the necessary compatibility issues between hardware platforms and operating systems.

21.) Can I take 1 dump file set from my source database and import it into multiple databases?
Yes, a single dump file set can be imported into multiple databases. You can also just import different subsets of the data out of that single dump file set.

22.) Is there a way to estimate the size of an export job before it gets underway?
Yes, you can use the ESTIMATE_ONLY parameter to see how much disk space is required for the job's dump file set before you start the operation.
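For example, a sketch that only reports the estimate and writes no dump file:

C:\>expdp system/******@noida schemas=scott estimate_only=y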

23.) Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes, jobs can be monitored from any location while they are running. Clients may also detach from an executing job without affecting it.

24.) If a job is stopped either voluntarily or involuntarily, can I restart it?
Yes, every Data Pump job creates a Master Table in which the entire record of the job is maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it can be restarted at a later point in time, without losing any data.

25.) Does Data Pump give me the ability to manipulate the Data Definition Language (DDL)?
Yes, with Data Pump it is now possible to change the definition of some objects as they are created at import time. For example, you can remap the source datafile name to the target datafile name in all DDL statements where the source datafile is referenced. This is really useful if you are moving across platforms with different file-system syntax.
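For example, a parameter-file line remapping a Windows datafile path to a Unix path at import time (the paths are illustrative; the quoting makes this easier to keep in a parameter file than on the command line):

remap_datafile='D:\ORADATA\USERS01.DBF':'/u01/oradata/users01.dbf'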

26.) Is Network Mode supported on Data Pump?
Yes, Data Pump Export and Import both support a network mode in which the job's source is a remote Oracle instance. This overlaps the unloading of the data (Export) with the loading of the data (Import), so those processes don't have to be serialized. A database link is used for the network transfer. You don't have to worry about allocating file space because there are no intermediate dump files.
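For example, a sketch of a network-mode import pulling the SCOTT schema directly across a database link (source_db_link is a hypothetical link to the source database; no dump file is involved):

C:\>impdp system/******@noida directory=datapump logfile=net_imp.log network_link=source_db_link schemas=scott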

27.) Does Data Pump support Flashback?
Yes, Data Pump supports the Flashback infrastructure, so you can perform an export and get a dump file set that is consistent with a specified point in time or SCN.
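For example, to take an export consistent as of a particular SCN (the SCN value shown is illustrative; take the current one from the source database first):

SQL> select current_scn from v$database;
C:\>expdp system/******@noida directory=datapump schemas=scott dumpfile=scott_scn.dmp flashback_scn=1234567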

28.) Can I still use Original Export? Do I have to convert to Data Pump Export?
An Oracle9i-compatible Export that operates against Oracle Database 10g will ship with Oracle 10g, but it does not export Oracle Database 10g features. Also, Data Pump Export has new syntax and a new client executable, so original Export scripts will need to change. Oracle recommends that customers convert to Oracle Data Pump Export.

29.) How do I import an old dump file into Oracle 10g? Can I use Original Import or do I have to  convert to Data Pump Import?
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can only read Oracle Database 10g (and later) Data Pump Export dump files. Data Pump Import has new syntax and a new client executable, so original Import scripts will need to change.

30.) When would I use SQL*Loader instead of Data Pump Export and Import?
You would use SQL*Loader to load data from external files into tables of an Oracle database. Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their databases. Data Pump Export and Import may be used less frequently, but for very important tasks such as migrating between platforms, moving data between development, test, and production databases, logical database backup, and application deployment throughout a corporation.

31.)When would I use Transportable Tablespaces instead of Data Pump Export and Import?
You would use Transportable Tablespaces when you want to move an entire tablespace of data from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be unplugged from a database, moved or copied to another location, and then plugged into another database. Moving data using Transportable Tablespaces can be much faster than performing either an export or import of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace dictionary information. Even when transporting a tablespace, Data Pump Export and Import are still used to handle the extraction and recreation of the metadata for that tablespace.
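A rough outline of the steps (tablespace and file names are illustrative): make the tablespace read only, export its metadata with TRANSPORT_TABLESPACES, copy the datafiles to the target, then plug them in with TRANSPORT_DATAFILES:

SQL> alter tablespace users read only;
C:\>expdp system/******@noida directory=datapump dumpfile=tts_users.dmp transport_tablespaces=users
(copy the datafiles to the target host, then on the target)
C:\>impdp system/******@noida directory=datapump dumpfile=tts_users.dmp transport_datafiles='D:\ORADATA\USERS01.DBF'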

Conclusion
Data Pump is fast and flexible. It replaces original Export and Import starting in Oracle Database 10g. Moving to Data Pump is easy, and it opens up a world of new options and features.


Reference : http://www.scribd.com


Enjoy      :-)