Wednesday, July 27, 2011

Resource Manager In Oracle


The Oracle Database Resource Manager (the Resource Manager) enables us to optimize resource allocation among the many concurrent database sessions. Although we can manage resources via profiles, a better option is to use the Resource Manager: by creating resource plans that specify how much of our resources should go to the various consumer groups, we can prioritize users and jobs. The Resource Manager can kill long-running jobs or switch jobs to a different priority group automatically, and the DBA can also manually switch users between consumer groups.

What Problems Does the Resource Manager Address?
When database resource allocation decisions are left to the operating system, we may encounter the following problems:

Excessive overhead  :  Excessive overhead results from operating system context switching between Oracle Database server processes when the number of server processes is high.

Inefficient scheduling  :  The operating system deschedules database servers while they hold latches, which is inefficient.

Inappropriate allocation of resources  :  The operating system distributes resources equally among all active processes and is unable to prioritize one task over another.

Inability to manage database-specific resources  :  The operating system cannot manage database-specific resources, such as parallel execution servers and active sessions.

How Does the Resource Manager Address These Problems?

The Resource Manager helps to overcome these problems by allowing the database more control over how hardware resources are allocated. The Resource Manager enables us to classify sessions into groups based on session attributes, and to then allocate resources to those groups in a way that optimizes hardware utilization for our application environment.

With the Resource Manager, we can:

  • Guarantee certain sessions a minimum amount of processing resources regardless of the load on the system and the number of users.
  • Distribute available processing resources by allocating percentages of CPU time to different users and applications. In a data warehouse, a higher percentage can be given to ROLAP (relational online analytical processing) applications than to batch jobs.
  • Limit the degree of parallelism of any operation performed by members of a group of users.
  • Create an active session pool. An active session pool consists of a specified maximum number of user sessions allowed to be concurrently active within a group of users. The active session pool limits the total number of sessions actively competing for resources, thereby enabling active sessions to make faster progress.
  • Manage runaway sessions or calls by detecting when they consume more than a specified amount of CPU or I/O. Such sessions can be automatically terminated or switched into a different (lower priority) group.
  • Prevent the execution of operations that the optimizer estimates will run for a longer time than a specified limit.
  • Limit the amount of time that a session can be idle. This can be further defined to mean only sessions that are blocking other sessions.

Example: A Simple Resource Plan
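A simple plan like this can be built with the DBMS_RESOURCE_MANAGER package. The sketch below is only illustrative (the consumer group names, plan name and CPU percentages are assumptions, not fixed values):

SQL> begin
       dbms_resource_manager.create_pending_area();
       -- two illustrative consumer groups
       dbms_resource_manager.create_consumer_group(
         consumer_group => 'OLTP_GROUP',  comment => 'online users');
       dbms_resource_manager.create_consumer_group(
         consumer_group => 'BATCH_GROUP', comment => 'batch jobs');
       -- the plan itself
       dbms_resource_manager.create_plan(
         plan => 'DAYTIME_PLAN', comment => 'simple two-group plan');
       -- directives: 70% CPU to OLTP, 20% to batch, 10% to everything else
       dbms_resource_manager.create_plan_directive(
         plan => 'DAYTIME_PLAN', group_or_subplan => 'OLTP_GROUP',
         comment => 'high priority', cpu_p1 => 70);
       dbms_resource_manager.create_plan_directive(
         plan => 'DAYTIME_PLAN', group_or_subplan => 'BATCH_GROUP',
         comment => 'low priority', cpu_p1 => 20);
       dbms_resource_manager.create_plan_directive(
         plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
         comment => 'mandatory catch-all group', cpu_p1 => 10);
       dbms_resource_manager.validate_pending_area();
       dbms_resource_manager.submit_pending_area();
     end;
     /

The plan is then activated (and can be reverted the same way) with:

SQL> alter system set resource_manager_plan = 'DAYTIME_PLAN';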

                     


There are four elements to the Database Resource Manager (DRM) :

1.) Resource Consumer Group   -  A resource consumer group is a collection of users with similar requirements for resource consumption. Users can be assigned to more than one resource consumer group, but each user's active session can only be assigned to one resource consumer group at a time. For example, I can create separate consumer groups: one for OLTP users, one for decision support users and one for users who typically run ad hoc queries.
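Sessions are placed into consumer groups either manually or via mapping rules. A minimal sketch, assuming the OLTP_GROUP from the plan sketch above already exists and that SCOTT is the user to be mapped:

SQL> begin
       dbms_resource_manager.create_pending_area();
       -- map every session of database user SCOTT to OLTP_GROUP
       dbms_resource_manager.set_consumer_group_mapping(
         attribute      => dbms_resource_manager.oracle_user,
         value          => 'SCOTT',
         consumer_group => 'OLTP_GROUP');
       dbms_resource_manager.submit_pending_area();
     end;
     /

SQL> -- the user also needs permission to switch into that group
SQL> exec dbms_resource_manager_privs.grant_switch_consumer_group('SCOTT','OLTP_GROUP',grant_option=>false);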

2.) Resource Plan -  A resource plan is a container for directives that specify how resources are distributed among consumer groups. A resource plan can also have multiple subplans that specify a finer grain of resource allocation detail. For example, I can create two subplans for my OLTP users: one for high-volume users and one for lower-volume users. Note that a subplan can have more than one parent plan, but Oracle will not permit a subplan to "loop back" upon itself.

3.) Resource Allocation Method -  The resource allocation method dictates how a specific resource, such as CPU, is divided among the consumer groups and subplans in a plan.

4.) Resource Plan Directive -  Resource plan directives allocate resources among the resource consumer groups in the resource plan. Essentially, directives connect resource consumer groups or subplans to their resource plans. There is at least one resource plan directive for each entry in the resource plan.

Resource plan directives use resource allocation methods to determine how the DRM will allocate resources to a resource consumer group or resource plan. Several allocation methods are available:

CPU  :  As its name implies, this method controls how much CPU utilization will be permitted to user sessions for different resource consumer groups within a specific resource plan. CPU utilization is parceled out in declining levels from 1 to 8, and percentages are used to assign how much CPU should be granted to each consumer group at each level. User sessions for a consumer group at a higher-priority level will receive resources before sessions assigned to a lower-priority level are allowed to receive resources, so this method provides a mechanism to ensure the highest-priority sessions receive at least some CPU resources.

Automatic Consumer Group Switching :  In some circumstances, I would like to have a user session that has begun to consume too many resources get switched automatically to a lower priority (for example, an ad hoc query user starts a "not in our lifetime" query, or a user runs a large report during peak OLTP processing hours). Using this method, the DRM can be directed to automatically switch such a user session to a different resource consumer group if the session is actively consuming resources beyond a certain amount of time. Even more impressive, this method can estimate an approximate time for the completion of the query and switch the session to a different group before the query is executed.

Active Session Pool Queueing  :  In some cases, I would like to be able to limit the activity for some groups of users based upon their maximum number of active sessions. This method detects when a specified threshold of user sessions has been reached for a consumer group and queues sessions until a "slot" is available. This method can also detect if a queued task has exceeded a specified time threshold, automatically terminate the queued task and return an error.

Maximum Estimated Execution Time  :  The DRM can also be directed to terminate an operation that has exceeded a specified maximum execution time. DRM will terminate the session and return a trappable error.

Maximum Degree of Parallelism  :  The DRM can limit the maximum degree of parallel processing for any operation within the plan.

Undo Pool Quota  :  Finally, the DRM can be directed to monitor the amount of undo (i.e. rollback) space being used by a resource consumer group. When undo utilization reaches the specified threshold, the DRM will terminate the current DML statement that is generating the undo; no other members of that consumer group will be allowed to continue DML operations until sufficient undo space is available.
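Most of these allocation methods are simply parameters of a resource plan directive. The sketch below shows how several of them could appear in a single directive; the plan and group names are assumed to exist and the limits are arbitrary illustrations:

SQL> begin
       dbms_resource_manager.create_pending_area();
       dbms_resource_manager.create_plan_directive(
         plan                     => 'DAYTIME_PLAN',    -- assumed plan
         group_or_subplan         => 'ADHOC_GROUP',     -- assumed group
         comment                  => 'limits for ad hoc users',
         cpu_p2                   => 50,                -- CPU at level 2
         active_sess_pool_p1      => 5,                 -- active session pool
         queueing_p1              => 120,               -- seconds a session may queue
         parallel_degree_limit_p1 => 4,                 -- max degree of parallelism
         switch_group             => 'LOW_GROUP',       -- automatic group switching
         switch_time              => 600,               -- after 600 seconds of CPU
         max_est_exec_time        => 3600,              -- max estimated execution time (seconds)
         undo_pool                => 102400);           -- undo quota in KB
       dbms_resource_manager.validate_pending_area();
       dbms_resource_manager.submit_pending_area();
     end;
     /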

For more details and examples, see the links below :

http://www.datadisk.co.uk/html_docs/oracle/resource_manager.htm

http://www.rittmanmead.com/2010/01/oracle-database-resource-manager-and-obiee/

http://oradbpedia.com/wiki/DBMS_RESOURCE_MANAGER_-_Overview

http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/dbrm001.htm


Enjoy   :-)



Saturday, July 23, 2011

Auditing in Oracle

Auditing is the monitoring and recording of selected user database actions. Auditing is normally used to:

1.) Investigate suspicious activity. For example, if an unauthorized user is deleting data from tables, the security administrator might decide to audit all connections to the database and all successful and unsuccessful deletions of rows from all tables in the database.

2.) Monitor and gather data about specific database activities. For example, the database administrator can gather statistics about which tables are being updated, how many logical I/Os are performed, or how many concurrent users connect at peak times.

There are two distinct types of auditing :
1.) Standard        ->>  auditing based on statement, privilege and object level
2.) Fine-grained  ->>  auditing at a finer granularity, based on content (e.g. value > 10,000)


Standard auditing is further classified into three levels :
 i.)    Statement    ->>  audit all actions on any type of object.
ii.)   Privilege       ->>  audit actions that stem from system privileges.
iii.)  Object-level ->>  specific audit actions like select, update, insert or delete.


For all three levels of auditing we can choose to audit by access (audit every time the object is accessed) or by session (audit only once per session), and we can also audit based on whether the access was successful (whenever successful) or not (whenever not successful).

How to enable auditing : 
Below are some basic methods for auditing : 


1.) AUDIT_SYS_OPERATIONS : This parameter audits SYS and SYSDBA connections. When we enable the parameter audit_sys_operations, the database writes a trace file of the session's actions to the udump directory. This parameter should be enabled on ALL production databases. Here is the method of enabling it :


SQL> alter system set audit_sys_operations=true scope=spfile;
SQL> shut immediate
SQL> startup
SQL> sho parameter audit
NAME                     TYPE        VALUE
------------------------ ----------- --------------------------------------------
audit_file_dest          string      D:\ORACLE\PRODUCT\10.2.0\ADMIN\NOIDA\ADUMP
audit_sys_operations     boolean     TRUE
audit_trail              string      DB
We can view the SYS and SYSDBA connection audit records in the Windows event log (eventvwr.msc).


2.) AUDIT_TRAIL parameter : This parameter audits SESSION, DCL, DDL, DML and SELECT statements to the table AUD$ or to OS files. This is the easiest and most common method of auditing an Oracle database, and this parameter should be set on ALL production databases. The audit_trail parameter can take the following values :

none           = disables auditing
os             = records the information to an O/S file (uses parameter audit_file_dest for the file location)
db             = records the information in the database; use the view dba_audit_trail (the view accesses table sys.aud$) to display audit information
db,extended    = as per the db value, but also populates the sqlbind and sqltext CLOB columns
xml            = audits to an O/S file, but in XML format (uses parameter audit_file_dest for the file location)
xml,extended   = as per the xml option, but also populates the sqlbind and sqltext CLOB columns


A basic list of audit options is as follows : 


==>Turn off auditing : 
SQL> alter system set audit_trail=none scope=spfile;


==> Auditing written to o/s :
SQL> alter system set audit_file_dest='c:\oracle\auditing';
SQL> alter system set audit_trail=os scope=spfile;


Note: if the audit_file_dest is not set then the default location is $oracle_home/rdbms/audit/


To start auditing we can use the statements below (there are many more options than those shown) :
==> On Session    
SQL> audit session by neer;


==> On Table 
SQL> audit table;


==>Table and Specific user   
SQL>audit table by neer;


==>Table, Specific User and Access
SQL> audit table by neer by access;

==>Privilege auditing
SQL>audit create any table;

SQL>audit create any table by neer;


==>Object auditing
SQL> audit select on neer.employees by access whenever successful;

SQL> audit select on neer.employees by access whenever not successful;


==>Disabling audit
SQL>noaudit table;
SQL>noaudit all privileges;
SQL>noaudit create any table by neer;


==>Turn off all auditing
SQL> noaudit all;                              (turn off all statement auditing)

SQL> noaudit all privileges;               (turn off all privilege auditing)
SQL> noaudit all on default;               (turn off all object auditing)


==>Purge audit table
SQL>delete from sys.aud$;
SQL>truncate table sys.aud$;


Here is a demo of standard auditing :


C:\>sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Sat Jul 23 13:58:26 2011
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options


SQL> alter system set audit_trail=db scope=spfile;   
System altered.


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.


Total System Global Area      167772160  bytes
Fixed Size                             1247876      bytes
Variable Size                         75498876    bytes
Database Buffers                   83886080    bytes
Redo Buffers                         7139328      bytes
Database mounted.
Database opened.
SQL> create user test identified by test
  2  default tablespace users
  3  quota unlimited on users;
User created.


SQL> grant connect to test;
Grant succeeded.


SQL> grant create table,create procedure to test;
Grant succeeded.


SQL> audit select table,update table,insert table,delete table by test by access;
Audit succeeded.


SQL> audit all by test by access;
Audit succeeded.


SQL> conn test/test@noida
Connected.


SQL> create table aud_table(id number);
Table created.


SQL> insert into aud_table values (&A);
Enter value for a: 1
old   1: insert into aud_table values (&A)
new   1: insert into aud_table values (1)
1 row created.


SQL> insert into aud_table values (&A);
Enter value for a: 2
old   1: insert into aud_table values (&A)
new   1: insert into aud_table values (2)
1 row created.


SQL> /
Enter value for a: 3
old   1: insert into aud_table values (&A)
new   1: insert into aud_table values (3)
1 row created.


SQL> update aud_table set id=123 where id=2;
1 row updated.


SQL> select  *  from aud_table;
        ID
----------
         1
       123
         3


SQL> delete from aud_table;
3 rows deleted.


SQL> drop table aud_table;
Table dropped.


SQL> conn / as sysdba
Connected.


SQL> select username,obj_name,action_name from dba_audit_trail
  2       where owner='TEST' order by timestamp;


USERNAME       OBJ_NAME        ACTION_NAME
------------   -------------   -------------------
TEST           AUD_TABLE       CREATE TABLE
TEST           AUD_TABLE       INSERT
TEST           AUD_TABLE       INSERT
TEST           AUD_TABLE       INSERT
TEST           AUD_TABLE       UPDATE
TEST           AUD_TABLE       SELECT
TEST           AUD_TABLE       DELETE
TEST           AUD_TABLE       DROP TABLE
8 rows selected.
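The same view can also be used to pick out failed attempts, since the RETURNCODE column records the Oracle error number (0 means success). A small sketch:

SQL> select username, obj_name, action_name, returncode
     from   dba_audit_trail
     where  returncode <> 0
     order  by timestamp;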


FGA (fine-grained auditing)  :  Fine-grained auditing (FGA) allows us to audit users accessing data that meets certain criteria. As with standard auditing, we can audit select, insert, update and delete operations. We use the package dbms_fga to add, remove, enable and disable FGA policies; it is virtually impossible to bypass these policies. One nice feature is that we can attach handlers (like a trigger) to the policies, which can execute procedures.


There are many options that can be applied to the dbms_fga package; here are some simple examples : 
==> Creating  :  


SQL> begin
dbms_fga.add_policy (
object_schema=>'neer',
object_name=>'employees',
policy_name=>'aud_test',
audit_column=>'salary,commission_pct',
enable=>false,
statement_types=>'select'); 
end;
/


==> Removing  :  


SQL> begin
dbms_fga.drop_policy (
object_schema=>'neer',
object_name=>'employees',
policy_name=>'aud_test');
end;
/


==>Enabling  :  


SQL>begin
dbms_fga.enable_policy (
object_schema=>'neer',
object_name=>'employees',
policy_name=>'aud_test');
end;
/


==> Disabling  :


SQL> begin
dbms_fga.disable_policy (
object_schema=>'neer',
object_name=>'employees',
policy_name=>'aud_test');
end;
/
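The handler feature mentioned above is wired in when the policy is created. A minimal sketch, assuming a procedure neer.salary_alert_proc exists with the standard FGA handler signature (object_schema, object_name, policy_name):

SQL> begin
       dbms_fga.add_policy (
         object_schema   => 'neer',
         object_name     => 'employees',
         policy_name     => 'sal_alert',
         audit_column    => 'salary',
         handler_schema  => 'neer',
         handler_module  => 'salary_alert_proc',  -- fired whenever the policy is triggered
         enable          => true,
         statement_types => 'select');
     end;
     /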

Here is Demo of the FGA Auditing  :  

First, we will create an FGA policy, say "FGA_AUD" :

SQL>  begin
  2  dbms_fga.add_policy (
  3  object_schema=>'test',
  4  object_name=>'hot',
  5  policy_name=>'FGA_AUD',
  6  audit_column=>'id',
  7  enable=>true,
  8  statement_types=>'select,update');
  9  end;
 10  /
PL/SQL procedure successfully completed.

Now we will perform some "select" and "update" statements to audit  :

SQL> update hot set id=2222 where id=123;
0 rows updated.

SQL> update hot set id=2222 where id=2;
1 row updated.

SQL> select * from hot;
        ID
   ----------
       22
      2222
      2342

SQL> update hot set id=8888 where id=2342;
1 row updated.

SQL> commit;
Commit complete.

To see the audited statements fired by the database user, run the query below :

SQL> select   db_user  ,scn,  sql_text   from  dba_fga_audit_trail   where   db_user='TEST';
DB_USER     SCN          SQL_TEXT
--------    ----------   ------------------------------------
TEST        1186412      update hot set id=2222 where id=123
TEST        1186421      update hot set id=2222 where id=2
TEST        1186431      select * from hot
TEST        1186440      update hot set id=8888 where id=2342

SYS.AUD$ table :  Make sure that the sys.aud$ table gets purged from time to time, as connections and DML activity in the database might come to a standstill if it becomes full.
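A quick way to keep an eye on its size (a simple sketch):

SQL> select count(*) from sys.aud$;
SQL> select segment_name, bytes/1024/1024 as mb from dba_segments where owner='SYS' and segment_name='AUD$';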


 Purge Audit Table : 
SQL> delete from sys.aud$;
SQL> truncate table sys.aud$;

Friday, July 22, 2011

Create Read-only user for a Schema


Sometimes the DBA is required to create a read-only user for a particular schema. While creating the user, one should always keep in mind that the user should have minimum privileges. There is no direct command to create a read-only user in Oracle. Sometimes I find people grant "select any table" privileges to create the read-only user, which is not correct. Here is a demo for creating read-only users.
In this demo, I have created a user named "neer" and given it access to select only the "HR" schema's tables.

Step  1  :  Create User  “NEER”

SQL> create user neer identified by neer;
User created.

Step  2  :  Grant session and create synonym privileges

SQL> grant create session ,create synonym to  neer ;
Grant succeeded.

Step  3  :  Make script to grant  select privileges to neer 

Here I have created the script with the help of spool, by selecting the tables and views of the HR schema.

SQL> SPOOL  C:\select_privs.sql
SQL> select 'grant select on hr.'||table_name|| '  to neer;' from dba_tables where owner='HR';
SQL> select 'grant select on hr.'||view_name||  '  to neer;' from dba_views where owner='HR';
SQL> spool off

Now we will check the spool file "select_privs.sql" and prepare it as a script for granting the permissions.
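Tip: issuing a few SQL*Plus settings before the SPOOL command (a sketch) keeps the output free of headings, feedback and echoed commands, so less manual cleanup of the spooled script is needed:

SQL> set heading off feedback off echo off verify off pagesize 0 linesize 200 trimspool on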

Step  4  :  Run the script to grant the permission

SQL> @C:\select_privs.sql

The script "select_privs.sql", after modification, is :

grant select on hr.REGIONS to neer;                                             
grant select on hr.LOCATIONS to neer;                                           
grant select on hr.DEPARTMENTS to neer;                                         
grant select on hr.JOBS to neer;                                                
grant select on hr.EMPLOYEES to neer;                                           
grant select on hr.JOB_HISTORY to neer;                                         
grant select on hr.COUNTRIES to neer;                                                                
grant select on hr.EMP_DETAILS_VIEW to neer;                                    
  
Step  5  :  Create the synonyms    
Connect to the "NEER" schema and create synonyms so that the "NEER" user can access the "HR" schema's objects without the schema prefix, i.e. "select * from employees" instead of "select * from hr.employees". Here again we use the spool method from above.

SQL> conn neer/neer@noida
SQL> SPOOL  C:\synonym_privs.sql
SQL> select 'create synonym '||view_name|| '  for HR.'||view_name||';' from all_views where owner='HR';
SQL> select 'create synonym '||table_name|| '  for HR.'||table_name||';' from all_tables where owner='HR';
SQL> spool off

Now we have the script to create the synonyms :

SQL>@ C:\synonym_privs.sql

Now we have successfully created a read-only user for a particular schema.

Step  6  :   Check the tables

SQL> select Count(*) from employees ;
  COUNT(*)
----------
       107

SQL> select Count(*) from hr.employees;
  COUNT(*)
----------
       107

SQL> sho user
USER is "NEER"


Enjoy   :-) 



Thursday, July 21, 2011

Five DBA best practices

I have read a very good post by Arup Nanda and wish to share it with you all. The post was about DBA best practices in day-to-day life. The practices are :


1.) Multiple Oracle Homes : 
My favorite best practice is the one about multiple Oracle Homes. Here is how it goes. When applying a patch or a patchset, I recommend against applying it to the existing Oracle Home. Instead, I suggest creating a new Oracle Home and applying the patches there. I create the first Oracle Home at /app/oracle/db_1, for instance. When a patch comes out, I install the whole Oracle software in a different home -- /app/oracle/db_2 -- and then apply the patch there. During the process of installation and patch application, the database is up and running, as it runs off the home /db_1. When the outage window comes, all I have to do is shut down Oracle, change the Oracle Home to db_2 and bring the database up. If there is a problem, I can reset the Oracle Home back to the old one.
So, here is the conventional approach:
1.                 Shut down the database
2.                 Apply patch to the Oracle Home
3.                 Start the database
4.                 In case of problems:
5.                 Shut down the database
6.                 Roll back the patch
7.                 Start the database
Steps 2 and 6 could take as much as three hours depending on the amount of patching.


The database is down during these times. 
In the new approach:
1.                 Install new Oracle Home
2.                 Apply the patch to the new Home
3.                 Shut down the database
4.                 Change Oracle Home to the new location
5.                 Start the database
6.                 In case of problems:
7.                 Shut down the database
8.                 Change Oracle Home to the old one
9.                 Start the database


The database is down only during steps 4 and 8, which takes a couple of minutes at the most, not hours.
So, here are the advantages:
1.) The downtime is significantly reduced, to one 60th of the original time.
2. ) The risk is reduced significantly, as there is no rolling back a patch; you just go to the older version.
3.) You can perform a "diff" on these two homes to see what changed. You can see the differences across   multiple homes as well.
4.)You can take several databases running on the same server to the new Oracle Home one by one.
5.) You can see the various Oracle Homes and what patch level they are on using the inventory.


The only negative part is the space consumption -- you need space for two Oracle Homes. But considering a typical Oracle Home takes about 4 GB or less, this aspect of the suggestion is trivial.


2.) Set audit trail to DB : 
Set the audit trail to DB by placing the parameter AUDIT_TRAIL = DB in the initialization parameter file during the database creation. Setting this parameter does not start the auditing, because an explicit AUDIT command must be given on the object. But the parameter must be set to a value other than FALSE (the default) for the command to take effect. Being a non-dynamic parameter, the database must be bounced to change the value of AUDIT_TRAIL. To save the trouble and avoid an outage, always set the value to DB, even if you never intend to audit anything. It does not break anything and you will always be ready to audit when the time comes.

3.) Don't use .log : 
Don't use .log as the extension of redo logs. Someone may run a script to remove all the log files, assuming they are redundant, and you will end up losing the online redo logs as well, forcing a database recovery. Instead, name them with the extension "redo" or "rdo."

4.) Preview RMAN Restore : 
Preview  RMAN  Restore  to  identify  all the  various backup pieces that will be used in the recovery process without doing an actual recovery. This eliminates any surprises from missing pieces during an actual recovery process.
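For example, from an RMAN session connected to the target database (standard RMAN syntax):

RMAN> restore database preview;
RMAN> restore database preview summary;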

5.) Create a new Oracle user for clients running on the same server as the DB : 
The Oracle Database server software also contains the client piece, which allows clients to connect to the database on the same server. But as a best practice, do not use the same user or the same software; use a new one. For instance, if "oracle" is the user used to install the Oracle software, create a new user called, say, "oraapp" and install the client-only software using that user. The user "oraapp" should not be part of the dba or the oinstall group, so this user can't log on to the database as sysdba. Create a new group called "appgrp" and assign the user oraapp to this group. All the application users on the box should also be part of the appgrp group. This way they can use the sqlplus, sqlldr and other executables on the server, but not be able to connect as sysdba.
The common practice is to use the client software in the same user as the database software owner; but starting with 10.2, Oracle has changed the security policy to take away the global execution permissions from the Oracle Home. So the only option is to let app users be part of the dba group or change the permissions on the Oracle Home -- both make the database extremely vulnerable.

For more click here 


Enjoy   :-) 


Wednesday, July 20, 2011

LogMiner utility in Oracle 10g

LogMiner is an Oracle utility. Using LogMiner, one can query the contents of online redo log files and archived log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The LogMiner tool can help the DBA find changed records in redo log files by using a set of PL/SQL procedures and functions.

LogMiner extracts all DDL and DML activity from the redo log files for viewing by a DBA via the dynamic performance view V$LOGMNR_CONTENTS. Internally, Oracle uses the LogMiner technology for several other features, such as Flashback Transaction Backout, Streams, and Logical Standby Databases. Most often, LogMiner is used for recovery purposes when the data consists of just a few tables or a single code change.

LogMiner Configuration : There are three basic objects in a LogMiner configuration. They are as follows:

1.) Source Database :  The source  database  is  the  database  that  produces  all  the  redo  log  files  that we  want  LogMiner to  analyze.

2.) LogMiner Dictionary : LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to us. For example, consider the following SQL statement:
SQL> insert into hr.jobs(job_id,job_title,min_salary,max_salary) values ('TT_WT','Technical Writer', 4000, 11000) ;
Without the dictionary, LogMiner will display:

SQL> insert   into   "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4")  values 
(HEXTORAW('45465f4748'), HEXTORAW('546563686e6963616c20577269746572'), 
HEXTORAW('c229'), HEXTORAW('c3020b')) ;

LogMiner provides three options for supplying the dictionary. The options are:

i.) Using the Online Catalog : Oracle recommends that we use this option when we will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option. To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as our dictionary source when we start LogMiner, as follows:

SQL> execute dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);

ii.) Extracting a LogMiner Dictionary to the Redo Log Files : Oracle recommends that we use this option when we do not expect to have access to the source database from which the redo log files were created, or if we anticipate that changes will be made to the column definitions in the tables of interest. To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode, and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed; therefore, the dictionary extracted to the redo log files is guaranteed to be consistent. To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL>execute dbms_logmnr_d.build(options=>dbms_logmnr_d.store_in_redo_logs) ; 

iii.) Extracting the LogMiner Dictionary to a Flat File : This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that we use either the online catalog or extract the dictionary from redo log files instead. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD, specifying a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in c:\dictionary:

SQL> execute dbms_logmnr_d.build('dictionary.ora', 'c:\dictionary', dbms_logmnr_d.store_in_flat_file);

3.) Redo log files : The redo log files contain the changes made to the database or database dictionary.
  
Here I will demonstrate "Extracting the LogMiner Dictionary to a Flat File". It requires a little bit of setup.

Step 1  :  Make a folder say  "dict"
C:\>mkdir c:\dict

Step 2 : Add the parameter utl_file_dir to the initialization file : This specifies the location where the dictionary will be created.
SQL> alter system set utl_file_dir='C:\dict\'  scope=spfile ; 
SQL> shut immediate 
SQL> startup 
SQL> @ORACLE_HOME\RDBMS\ADMIN\utlfile.sql 

Normally Oracle records the change vector in the redo log files, i.e. just the information that is required to reconstruct the operation at recovery time. If we want additional information in the redo log, then we need to enable supplemental logging prior to generating log files that will be analyzed by LogMiner. Therefore, at the very least, we will enable minimal supplemental logging, as the following SQL statements show:

Step 3 :  Enable the supplemental  logging

SQL> select supplemental_log_data_min from v$database ;
SUPPLEME
--------
NO
Here minimal supplemental logging is not enabled, so we enable supplemental logging :

SQL> alter database  add supplemental log data ; 
Database altered.

Step 4 : Create the dictionary file : 
The dbms_logmnr_d.build procedure requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, we must specify a directory for use by the dbms_logmnr_d.build procedure or the procedure will fail. To specify a directory, set the initialization parameter utl_file_dir in the initialization parameter file. Then call the PL/SQL procedure dbms_logmnr_d.build, specifying a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file.


SQL> begin 
       dbms_logmnr_d.build 
       ( 
         dictionary_filename => 'dictionary.dic', 
         dictionary_location => 'C:\dict', 
         options             => dbms_logmnr_d.store_in_flat_file 
       );
     end;
     /
PL/SQL procedure successfully completed.

This has recorded the dictionary information in the file "C:\dict\dictionary.dic".


Step 5 : Add the list of redo log files to be analyzed :
To mine data in the redo log files, LogMiner needs information about which redo log files to mine. Use the dbms_logmnr.add_logfile procedure to create a list of redo log files before we start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database. For example:

SQL> execute dbms_logmnr.add_logfile ( logfilename=> 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO03.LOG' ,  options=>dbms_logmnr.new) ;


To determine which redo log files are being analyzed in the current LogMiner session, we can query the v$logmnr_logs view, which contains one row for each redo log file. 
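For instance, further redo log files can be appended to the list and the list inspected afterwards (a sketch; the second file name is illustrative):

SQL> execute dbms_logmnr.add_logfile ( logfilename=> 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO02.LOG' , options=>dbms_logmnr.addfile) ;

SQL> select filename from v$logmnr_logs ;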


Step  6 : Start LogMiner :
dbms_logmnr.start_logmnr starts a LogMiner session. It populates the view v$logmnr_contents. v$logmnr_contents is only accessible to the current session which started LogMiner, and only until dbms_logmnr.end_logmnr is called.


SQL> begin
       dbms_logmnr.start_logmnr 
       (    
        dictfilename => 'C:\dict\dictionary.dic',
        options      => dbms_logmnr.print_pretty_sql + 
                        dbms_logmnr.no_sql_delimiter +
                        dbms_logmnr.ddl_dict_tracking 
       );
     end;
     /
Many options can be provided with start_logmnr that affect the data representation in v$logmnr_contents, e.g. :


I.)   dbms_logmnr.print_pretty_sql formats the SQL statements to enhance readability.
II.)  dbms_logmnr.no_sql_delimiter omits the ";" from the end of the SQL statements, which is useful when the SQL is meant to be re-executed in PL/SQL routines.
III.) dbms_logmnr.ddl_dict_tracking tracks DDL statements in the log files.



Step 7 : Copy contents of v$logmnr_contents to a user table : 
When the LogMiner session ends, v$logmnr_contents is no longer accessible. It is always better to copy the contents of v$logmnr_contents to a user table and then perform the analysis, as it is quite expensive to query v$logmnr_contents. Moreover, the user table can be indexed for better query performance. Let the table name be "myLogAnalysis".

SQL> drop table  myLogAnalysis ;
Table dropped.

SQL> create table myLogAnalysis  as select * from v$logmnr_contents ;
Table created.

Step 8 :  End the LogMiner session  
dbms_logmnr.end_logmnr() ends the LogMiner session, after which v$logmnr_contents is no longer accessible, but our user table myLogAnalysis, which is a copy of v$logmnr_contents, is still available.
SQL> begin
       DBMS_LOGMNR.END_LOGMNR();
     end;
     /
PL/SQL procedure successfully completed.

Step 9 : Query the copy of v$logmnr_contents (myLogAnalysis)
The output below shows the system change number for the change, the segment on which the change was made, the sql statement to redo the change and the sql statement to undo the change.

SQL>select scn , seg_name , sql_redo , sql_undo from   myLogAnalysis  where username = 'SCOTT' AND (seg_owner is null OR seg_owner = 'SCOTT') ;
   SCN       SEG_NAME      SQL_REDO                          SQL_UNDO
--------    -----------        --------------------------------- ---------------------------------
639968      TEST_LOGMNR create table scott.test_logmnr
                   (id  number,
                    name varchar2(10)
                    )
640039             set transaction read write
640039 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
                    values                            where
                       "ID" = 1,                         "ID" = 1 and
                       "NAME" = 'TEST1'                  "NAME" = 'TEST1' and
                                                         ROWID = 'AAAM7vAAEAAAALcAAA'

640041 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
                    values                            where
                       "ID" = 2,                         "ID" = 2 and
                       "NAME" = 'TEST2'                  "NAME" = 'TEST2' and
                                                         ROWID = 'AAAM7vAAEAAAALcAAB'
640044             commit
640047             set transaction read write
640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"      update "SCOTT"."TEST_LOGMNR"
                     set                               set
                       "NAME" = 'TEST'                   "NAME" = 'TEST1'
                     where                             where
                       "NAME" = 'TEST1' and              "NAME" = 'TEST' and
                       ROWID = 'AAAM7vAAEAAAALcAAA'      ROWID = 'AAAM7vAAEAAAALcAAA'

640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"      update "SCOTT"."TEST_LOGMNR"
                     set                               set
                       "NAME" = 'TEST'                   "NAME" = 'TEST2'
                     where                             where
                       "NAME" = 'TEST2' and              "NAME" = 'TEST' and
                       ROWID = 'AAAM7vAAEAAAALcAAB'      ROWID = 'AAAM7vAAEAAAALcAAB'
640050             commit
640052             set transaction read write
640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR" insert into "SCOTT"."TEST_LOGMNR"
                    where                             values
                       "ID" = 1 and                      "ID" = 1,
                       "NAME" = 'TEST' and               "NAME" = 'TEST'
                       ROWID = 'AAAM7vAAEAAAALcAAA'
640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR" insert into "SCOTT"."TEST_LOGMNR"
                    where                             values
                       "ID" = 2 and                      "ID" = 2,
                       "NAME" = 'TEST' and               "NAME" = 'TEST'
                       ROWID = 'AAAM7vAAEAAAALcAAB'
640066             commit

Now we have more options to filter the data according to our requirements. Here are a few examples:

Filtering Data By Time  : 
To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure expects date values. Use the TO_DATE function to specify date and time, as in this example:

SQL> execute DBMS_LOGMNR.START_LOGMNR(DICTFILENAME =>'c:\dict\dictionary.ora', STARTTIME => TO_DATE('20-Jul-2011 04:30:00', 'DD-MON-YYYY HH:MI:SS'),ENDTIME => TO_DATE('20-Jul-2011  04:45:00', 'DD-MON-YYYY HH:MI:SS')); 

If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read from start to end, for each  SELECT statement issued.
The timestamps should not be used to infer the ordering of redo records. We can infer the order of redo records by using the SCN.

Filtering Data By SCN : 
To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters, as in this example:
SQL> execute  DBMS_LOGMNR.START_LOGMNR (dictfilename=>'c:\dict\dictionary.ora',  STARTSCN => 100, ENDSCN => 150);

The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters in situations where all are specified.
If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read from start to end, for each SELECT statement issued.

Accessing LogMiner Information : 
LogMiner information is contained in the following views. We can use SQL to query them as we would any other view.
V$LOGMNR_CONTENTS  :  Shows changes made to user and table information.

V$LOGMNR_DICTIONARY : Shows information about the LogMiner dictionary file, provided the dictionary was created using the STORE_IN_FLAT_FILE option. The information shown includes the database name and status information.

V$LOGMNR_LOGS  :  Shows information about specified redo logs. There is one row for each redo log.

V$LOGMNR_PARAMETERS : Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.


For more references


Enjoy   :-)