Wednesday, April 13, 2011

Delete or Cleanup Orphaned Data Pump Jobs

Sometimes we need to stop a Data Pump operation for some reason and continue it later (as described in the previous post). To clean up orphaned Data Pump jobs, we perform the following steps.

1.)  Check for orphaned Data Pump jobs.

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions
     from dba_datapump_jobs;

OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE   STATE         ATTACHED_SESSIONS
------------ ---------------------- ----------- ---------- ------------- -----------------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA     NOT RUNNING                   0
SYSTEM       SYS_EXPORT_SCHEMA_03   EXPORT      SCHEMA     NOT RUNNING                   0
SYSTEM       SYS_EXPORT_SCHEMA_02   EXPORT      SCHEMA     NOT RUNNING                   0

2.)  Check the STATE column.
For orphaned jobs the state is NOT RUNNING with no attached sessions, so all three jobs in the output above are orphaned.
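Note that a cleanly stopped job that is meant to be restarted also shows NOT RUNNING, so before cleaning up it is worth confirming that no client session is still attached. A quick check, joining dba_datapump_sessions to v$session:

SQL> select s.sid, s.serial#, d.owner_name, d.job_name
     from   v$session s, dba_datapump_sessions d
     where  s.saddr = d.saddr;

If this returns no rows, no client is currently attached to any Data Pump job.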

3.)  Drop the master tables.
Since the above jobs are stopped and will not be restarted, drop their master tables. The master tables here are SYS_EXPORT_SCHEMA_01, SYS_EXPORT_SCHEMA_02, and SYS_EXPORT_SCHEMA_03.
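Before dropping, one can also confirm that a master table actually exists for each job; a sketch joining dba_objects to dba_datapump_jobs:

SQL> select o.owner, o.object_name, o.object_type, o.status
     from   dba_objects o, dba_datapump_jobs j
     where  o.owner = j.owner_name
     and    o.object_name = j.job_name;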
SQL> drop table system.SYS_EXPORT_SCHEMA_03;
Table dropped.

SQL> drop table system.SYS_EXPORT_SCHEMA_01;
Table dropped.

SQL> drop table system.SYS_EXPORT_SCHEMA_02;
Table dropped.


4.)  Check for existing Data Pump jobs.
Now re-run the query from step 1. If the dropped master tables went into the recycle bin, purge them from it.
SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;

no rows selected
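The dropped master tables can be listed in the recycle bin before purging them; dba_recyclebin shows dropped objects and their original names:

SQL> select owner, object_name, original_name, type
     from   dba_recyclebin
     where  original_name like 'SYS_EXPORT%';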
SQL> purge table system.SYS_EXPORT_SCHEMA_01;
Table purged.

SQL> purge table system.SYS_EXPORT_SCHEMA_02;
Table purged.

SQL> purge table system.SYS_EXPORT_SCHEMA_03;
Table purged.

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
 no rows selected

Note : 

1.)  Orphaned Data Pump jobs have no impact on new Data Pump jobs. The view dba_datapump_jobs is based on gv$datapump_job, obj$, com$, and user$. It shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (orphaned jobs). If a new Data Pump job is started, a new entry is created that has no relation to the old jobs.

2.)  When a new Data Pump job is started with a system-generated name, the names of existing jobs in dba_datapump_jobs are checked in order to obtain a unique new job name. Naturally, there must be enough free space for the new master table in the schema that started the job.

3.)  A Data Pump job is not the same as a job defined with DBMS_JOB. Jobs created with DBMS_JOB use their own processes, while Data Pump jobs use a master process and worker process(es). When a Data Pump job is temporarily stopped (STOP_JOB in interactive command mode), the job still exists in the database (state: NOT RUNNING) while the master and worker processes are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).

4.)  The possibility of corruption when the master table of an active Data Pump job is deleted depends on the kind of job. If it is an export job, corruption is unlikely: dropping the master table only causes the Data Pump master and worker processes to abort, similar to aborting an export from the original export client.



Enjoy     :-) 


Resume and Attach Data Pump Operation

The ATTACH parameter is one of the most useful parameters in Oracle Data Pump. It comes into play when a Data Pump operation is in progress and we need to stop it for a while and continue it later: we stop the job, then attach to it again with the ATTACH parameter and resume it.

A job name also allows the job to be easily identified when querying the DBA_DATAPUMP_JOBS data dictionary view. If a job name is not specified, Oracle creates a default name of the form SYS_EXPORT_<mode>_NN, where <mode> denotes the type of export (FULL, SCHEMA, TABLE, etc.) and NN is an incrementing number starting at 01. The job name is then used to attach, and reattach, to running jobs via the ATTACH parameter.
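For example, the job can be named explicitly with the JOB_NAME parameter (HR_EXP_JOB here is just an illustrative name):

C:\>expdp system/xxxx@noida directory=datapump1 schemas=hr dumpfile=hr_schema logfile=ex_hrlog.log job_name=HR_EXP_JOB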

The ATTACH parameter attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached, followed by the export prompt.
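The full syntax is ATTACH[=[schema_name.]job_name]; the schema prefix is needed when attaching to a job owned by another user, for example:

C:\>expdp system/xxxx@noida attach=system.SYS_EXPORT_SCHEMA_01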

For a successful attach, the uncorrupted dump file must still be in its proper location; otherwise we will not be able to attach. Below is a demo of the ATTACH parameter.

C:\>expdp system/xxxx@noida directory=datapump1 schemas=hr dumpfile=hr_schema logfile=ex_hrlog.log
 Export: Release 11.1.0.6.0 - Production on Wednesday, 13 April, 2011 14:21:25
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=datapump1 dumpfile=hr_schema logfile=ex_hrlog.log schemas=hr
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

(Press Ctrl+C to switch from logging mode to the interactive command interface, then run the following at the export prompt.)

Export> stop_job
Are you sure you wish to stop this job ([yes]/no): y
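STOP_JOB halts the job but keeps the master table, so the job can be restarted later. Some of the other commands available at the Export> prompt:

Export> STATUS             -- show the status of the current job
Export> START_JOB          -- restart the attached job
Export> CONTINUE_CLIENT    -- return to logging mode (restarts an idle job)
Export> EXIT_CLIENT        -- exit the client and leave the job running
Export> KILL_JOB           -- detach from and delete the job
Export> PARALLEL=4         -- change the number of active workers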

Now find out the job name to attach to:

C:\>set ORACLE_SID=noida
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 13 14:27:31 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select  owner_name , job_name , operation, job_mode from dba_datapump_jobs;
OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE
------------ ---------------------- ----------- ----------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\>expdp system/xxxx@noida attach=SYS_EXPORT_SCHEMA_01
Export: Release 11.1.0.6.0 - Production on Wednesday, 13 April, 2011 14:36:26
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_SCHEMA_01
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: A370CE1FA5154C27955DF34D010CB822
  Start Time: Wednesday, 13 April, 2011 14:37:27
  Mode: SCHEMA
  Instance: noida
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/********@noida directory=datapump1 dumpfile=hr_schema logfile=ex_hrlog.log schemas=hr
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: d:\dpump\hr_schema.dmp
    bytes written: 28,672
Worker 1 Status:
  Process Name: DW01
  State: UNDEFINED

Export> CONTINUE_CLIENT

Job SYS_EXPORT_SCHEMA_01 has been reopened at Wednesday, 13 April, 2011 14:37
Restarting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@noida directory=datapump1 dumpfile=hr_schema logfile=ex_hrlog.log schemas=hr
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."COUNTRIES"                            6.375 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.015 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."JOBS"                                 6.984 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.484 KB       4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  d:\dpump\hr_schema.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:44:00


Enjoy     :-)



What is SQL Injection?

SQL Injection :  SQL injection is a basic attack used either to gain unauthorized access to a database or to retrieve information directly from it. SQL injection attacks are simple in nature: an attacker passes string input to an application in the hope of manipulating the SQL statement to his or her advantage.


Oracle may provide stronger and more inherent protections against SQL injection attacks than other databases; even so, applications without proper defenses against these types of attacks remain vulnerable, and many web applications are.

Oracle has generally fared well against SQL injection attacks, as there is no support for multiple SQL statements per call (SQL Server and PostgreSQL), no EXECUTE statement (SQL Server), and no INTO OUTFILE function (MySQL). Also, the common use of bind variables in Oracle environments for performance reasons provides strong protection against SQL injection.
There are generally four types of SQL injection attack:

1.) SQL Manipulation
 
2.) Code Injection
 
3.) Function Call Injection
 
4.) Buffer Overflows

All of these types of SQL injection apply to databases including SQL Server, DB2, MySQL, PostgreSQL, and Oracle. Among these, SQL manipulation is the most common and the most dangerous. Let's have a look.

1.) SQL Manipulation :  The most common type of SQL injection is SQL manipulation. The attacker attempts to modify the existing SQL statement by adding elements to the WHERE clause or by extending the statement with set operators like UNION, INTERSECT, or MINUS.
In the first query below, just OR 'A'='A' is appended to the WHERE clause; in the second, the clause

UNION
SELECT USERNAME FROM AA
WHERE USERNAME LIKE '%'

is appended. Example:

SQL> conn scott/tiger@noida
Connected.
SQL> create table aa (username varchar2(20), password varchar2(20));
Table created.

SQL> insert into aa values ('SCOTT','TIGER');
1 row created.

SQL> insert into aa values ('SCOTT1','TIGER1');
1 row created.

SQL> insert into aa values ('SCOTT2','TIGER2');
1 row created.

SQL> commit;
Commit complete.

SQL> select * from aa;

USERNAME             PASSWORD
-------------------- --------------------
SCOTT                TIGER
SCOTT1               TIGER1
SCOTT2               TIGER2

SQL> select username from aa where username ='SCOTT' and password='TIGER' ; 

USERNAME
--------------------
SCOTT

SELECT USERNAME FROM AA  WHERE USERNAME='SCOTT' AND PASSWORD='TIGER' OR 'A'='A'
SQL> /
 USERNAME
--------------------
SCOTT
SCOTT1
SCOTT2

SELECT USERNAME FROM AA  WHERE USERNAME='SCOTT' AND PASSWORD='TIGER'
SQL> /
 USERNAME
--------------------
SCOTT

SELECT USERNAME FROM AA WHERE USERNAME='HALIM' AND PASSWORD='TEST'
 UNION SELECT USERNAME FROM AA WHERE USERNAME LIKE '%'
 USERNAME
--------------------
SCOTT2
SCOTT1
SCOTT
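
By contrast, if the application binds the input instead of concatenating it, the whole injected string is compared as a single literal value and matches nothing. A sketch using SQL*Plus bind variables against the same aa table (the value assigned to :v_pass is the injected password string from above):

SQL> variable v_user varchar2(50)
SQL> variable v_pass varchar2(50)
SQL> exec :v_user := 'SCOTT'
SQL> exec :v_pass := 'TIGER'' OR ''A''=''A'
SQL> select username from aa where username = :v_user and password = :v_pass;

no rows selected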

PL/SQL code subject to SQL injection attacks :
SQL statements can be executed in four different ways in PL/SQL:

a) embedded (static) SQL,
b) cursors,
c) EXECUTE IMMEDIATE statements,
d) the DBMS_SQL package.

A.) Embedded SQL statements and static cursors accept only bind variables, so they are not subject to SQL injection attacks; the remaining methods below are.
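For instance, a minimal sketch of embedded SQL (pro_static is a hypothetical name; emp is the standard demo table): the PL/SQL variable is always passed as a bind, so the statement text can never be altered by the input.

CREATE OR REPLACE PROCEDURE pro_static (p_empno IN NUMBER)
AS
  rec_state emp%ROWTYPE;
BEGIN
  -- p_empno is bound automatically; it can only ever be a value
  SELECT * INTO rec_state FROM emp WHERE empno = p_empno;
END;
/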

B.) Dynamic cursors (REF cursors)  [to prevent SQL injection, bind variables should always be used]

The following procedure builds the statement by concatenating the input, which leaves the dynamic cursor open to attack:

CREATE OR REPLACE PROCEDURE pro_sample (p_empno IN VARCHAR2)
AS
  TYPE cur_typ IS REF CURSOR;
  cursor_states  cur_typ;
  rec_state      emp%ROWTYPE;
  sql_s          VARCHAR2(200);
BEGIN
  -- the input is concatenated straight into the statement text,
  -- so any SQL the caller supplies becomes part of the query
  sql_s := 'SELECT * FROM emp WHERE empno = ''' || p_empno || '''';
  OPEN cursor_states FOR sql_s;
  LOOP
    FETCH cursor_states INTO rec_state;
    EXIT WHEN cursor_states%NOTFOUND;
  END LOOP;
  CLOSE cursor_states;
END;
/

This procedure is subject to SQL injection: an input of 7369' OR '1'='1 turns the WHERE clause into empno = '7369' OR '1'='1', which is true for every row.
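A minimal sketch of the same cursor opened safely, passing the input as a bind variable (pro_sample_safe is a hypothetical name):

CREATE OR REPLACE PROCEDURE pro_sample_safe (p_empno IN VARCHAR2)
AS
  TYPE cur_typ IS REF CURSOR;
  cursor_states cur_typ;
  rec_state     emp%ROWTYPE;
BEGIN
  -- the placeholder :1 is bound at open time, so p_empno is
  -- always treated as data, never as SQL text
  OPEN cursor_states FOR 'SELECT * FROM emp WHERE empno = :1'
    USING p_empno;
  LOOP
    FETCH cursor_states INTO rec_state;
    EXIT WHEN cursor_states%NOTFOUND;
  END LOOP;
  CLOSE cursor_states;
END;
/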
 

C.) EXECUTE IMMEDIATE statements  [to prevent SQL injection, bind variables should always be used]
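For instance, a minimal sketch contrasting the unsafe and safe forms (empno 7369 is just an illustrative value against the same emp table):

DECLARE
  p_empno   VARCHAR2(10) := '7369';
  rec_state emp%ROWTYPE;
BEGIN
  -- unsafe: the input becomes part of the statement text
  -- EXECUTE IMMEDIATE 'SELECT * FROM emp WHERE empno = '''
  --                   || p_empno || '''' INTO rec_state;

  -- safe: the statement text is fixed and the input is bound
  EXECUTE IMMEDIATE 'SELECT * FROM emp WHERE empno = :1'
    INTO rec_state
    USING p_empno;
END;
/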

D.) The DBMS_SQL package (for dynamic SQL statements)  [to prevent SQL injection, bind variables should always be used]
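A minimal DBMS_SQL sketch with a bound value (the :b_empno name is arbitrary; emp as above):

DECLARE
  c INTEGER;
  n INTEGER;
BEGIN
  c := DBMS_SQL.OPEN_CURSOR;
  -- the statement text is fixed at parse time ...
  DBMS_SQL.PARSE(c, 'UPDATE emp SET sal = sal WHERE empno = :b_empno',
                 DBMS_SQL.NATIVE);
  -- ... and the value is supplied separately as a bind
  DBMS_SQL.BIND_VARIABLE(c, ':b_empno', 7369);
  n := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/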
In every case, bind variables should always be used: they prevent SQL injection and also improve application performance, since the statement text stays constant and can be shared.