Wednesday, April 13, 2011

Delete or Cleanup Orphaned Data Pump Jobs

Sometimes we need to stop a Data Pump operation for some reason and continue it later (as described in the previous post). When a stopped job is never going to be resumed, its master table is left behind and the job becomes orphaned. To clean up orphaned Data Pump jobs, we perform the following steps.

1.)  Check for orphaned Data Pump jobs.

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;

OWNER_NAME   JOB_NAME              OPERATION  JOB_MODE  STATE        ATTACHED_SESSIONS
-----------  --------------------  ---------  --------  -----------  -----------------
SYSTEM       SYS_EXPORT_SCHEMA_01  EXPORT     SCHEMA    NOT RUNNING                  0
SYSTEM       SYS_EXPORT_SCHEMA_03  EXPORT     SCHEMA    NOT RUNNING                  0
SYSTEM       SYS_EXPORT_SCHEMA_02  EXPORT     SCHEMA    NOT RUNNING                  0
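
Optionally, the master tables behind these jobs can be cross-checked against dba_objects, which also reveals whether any of them have already been dropped into the recycle bin (they then appear under a BIN$ name). A small sketch, assuming the master table has the same name as the job, which is the normal case:

SQL> select o.owner||'.'||o.object_name "OWNER.OBJECT", o.object_type, o.status
     from   dba_objects o, dba_datapump_jobs j
     where  o.owner = j.owner_name
     and    o.object_name = j.job_name
     order  by 1;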

2.)  Check the "state" column.
For orphaned jobs the state is NOT RUNNING and no sessions are attached, so from the output above all three jobs are orphaned. Orphaned Data Pump jobs have no impact on new Data Pump jobs: if a new job is started, a new entry is created that has no relation to the old ones.
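
Before treating a NOT RUNNING job as orphaned, it can be worth confirming that no client session is still attached to it. A quick sketch of one way to check, joining dba_datapump_sessions to v$session on the session address:

SQL> select s.sid, s.serial#, s.username, d.job_name
     from   v$session s, dba_datapump_sessions d
     where  s.saddr = d.saddr;

If a job does not appear in this output, no session is attached to it.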

3.)  Drop the master tables.
Since the above jobs are stopped and will not be restarted anymore, their master tables can be dropped. The master tables here are SYS_EXPORT_SCHEMA_01, SYS_EXPORT_SCHEMA_03, and SYS_EXPORT_SCHEMA_02.
SQL> drop table system.SYS_EXPORT_SCHEMA_03;
Table dropped.
SQL> drop table system.SYS_EXPORT_SCHEMA_01;
Table dropped.
SQL> drop table system.SYS_EXPORT_SCHEMA_02;
Table dropped.
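
With more than a handful of orphaned jobs, typing every DROP by hand gets tedious. A hedged sketch that generates the statements from dba_datapump_jobs; review the output before running any of it, and skip jobs you still intend to restart:

SQL> select 'drop table '||owner_name||'.'||job_name||';' as drop_stmt
     from   dba_datapump_jobs
     where  state = 'NOT RUNNING'          -- stopped jobs
     and    attached_sessions = 0          -- with no client attached
     and    job_name not like 'BIN$%';

The BIN$ filter keeps master tables that already sit in the recycle bin out of the generated list.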


4.) Check for existing Data Pump jobs.
Now re-run the query from step 1 to check for remaining Data Pump jobs. If the dropped master tables went into the recycle bin, purge them from the recycle bin as well.
SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
no rows selected

SQL> purge table system.SYS_EXPORT_SCHEMA_01;
Table purged.
SQL> purge table system.SYS_EXPORT_SCHEMA_02;
Table purged.
SQL> purge table system.SYS_EXPORT_SCHEMA_03;
Table purged.

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
no rows selected
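
If anything still shows up after the purge, the recycle bin can be inspected directly. A minimal sketch using dba_recyclebin; the LIKE pattern simply matches the system-generated export job names used above:

SQL> select owner, object_name, original_name, type
     from   dba_recyclebin
     where  original_name like 'SYS_EXPORT%';

An empty result means there are no leftover master table copies to purge.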

Note : 

1.)  Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view dba_datapump_jobs is based on gv$datapump_job, obj$, com$, and user$. It shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). If a new Data Pump job is started, a new entry is created which has no relation to the old Data Pump jobs.

2.)  When a new Data Pump job is started with a system-generated name, the names of the existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there also needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.
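
One way to avoid colliding with leftover system-generated names in the first place is to give the export an explicit job name. A minimal sketch; the schema, directory, dump file and job name below are placeholders, not values from this post:

$ expdp system/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott.log job_name=SCOTT_EXP_01

The master table is then created under the name SCOTT_EXP_01 in the schema of the user running the job, which makes it easy to recognise later.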

3.) A Data Pump job is not the same as a job defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes; Data Pump jobs use a master process and worker process(es). If a Data Pump job is only temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (status: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).
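
To resume such a temporarily stopped job instead of cleaning it up, attach to it by name and restart it. A short sketch, reusing the job name and credentials from the example above:

$ expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> status
Export> start_job
Export> continue_client

Here status shows the current state of the job, start_job restarts it, and continue_client switches back to logging mode. Only do this for jobs you actually want to finish; once the master table has been dropped, the job can no longer be attached to or restarted.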

4.) The possibility of corruption when the master table of an active Data Pump job is deleted depends on the kind of job. If it is an export job, corruption is unlikely: dropping the master table only causes the Data Pump master and worker processes to abort, which is similar to aborting an export with the original export client.



Enjoy     :-) 


4 comments:

Jim Thompson said...

This does not work. I deleted the not running jobs and there must still be more to do. Ora-31626 on any attempted impdp job.

Anonymous said...

Nice and simple solution of the problem. Thanks a ton..

Anonymous said...

Thanks, Just got what I want.

Anonymous said...

Worked fine for me and is easier to script than with the expdp command.

Thanks!
Martin