Thursday, July 21, 2011

Five DBA best practices

I recently read a very good post by Arup Nanda and wish to share it with you all. The post was about DBA best practices in day-to-day work. The practices are:


1.) Multiple Oracle Homes : 
My favorite best practice is the one about multiple Oracle Homes. Here is how it goes. When applying a patch or a patchset, I recommend against applying it to the existing Oracle Home. Instead, I suggest creating a new Oracle Home and applying the patches there. I create the first Oracle Home at /app/oracle/db_1, for instance. When a patch comes out, I install the whole Oracle software in a different home -- /app/oracle/db_2 -- and then apply the patch there. During the process of installation and patch application, the database is up and running, as it runs off the home /db_1. When the outage window comes, all I have to do is shut down Oracle, change the Oracle Home to db_2 and bring the database up. If there is a problem, I can reset the Oracle Home back to the old one.
So, here is the conventional approach:
1. Shut down the database
2. Apply patch to the Oracle Home
3. Start the database
4. In case of problems:
5. Shut down the database
6. Roll back the patch
7. Start the database
Steps 2 and 6 could take as much as three hours depending on the amount of patching.


The database is down during these times. 
In the new approach:
1. Install new Oracle Home
2. Apply the patch to the new Home
3. Shut down the database
4. Change Oracle Home to the new location
5. Start the database
6. In case of problems:
7. Shut down the database
8. Change Oracle Home to the old one
9. Start the database


The database is down only during steps 3 through 5 (or 7 through 9 if you must fall back), which take a couple of minutes at most, not hours.
So, here are the advantages:
1.) The downtime is significantly reduced -- to as little as one-sixtieth of the original time.
2.) The risk is reduced significantly, as there is no rolling back of a patch; you simply point back to the older home.
3.) You can perform a "diff" on the two homes to see what changed. You can compare across multiple homes as well.
4.) You can move several databases running on the same server to the new Oracle Home one by one.
5.) You can see the various Oracle Homes and what patch level they are on using the inventory.


The only negative is the space consumption -- you need space for two Oracle Homes. But considering that a typical Oracle Home takes about 4 GB or less, this aspect is trivial.
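As an illustration of the switch itself, here is a minimal sketch of the outage window on UNIX, assuming the two homes are /app/oracle/db_1 and /app/oracle/db_2 as above (on Windows the Oracle services would be repointed instead; also remember to update /etc/oratab to match):

/opt/oracle> sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
/opt/oracle> export ORACLE_HOME=/app/oracle/db_2
/opt/oracle> export PATH=$ORACLE_HOME/bin:$PATH
/opt/oracle> sqlplus / as sysdba
SQL> startup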


2.) Set audit trail to DB : 
Set the audit trail to DB by placing the parameter AUDIT_TRAIL = DB in the initialization parameter file during database creation. Setting this parameter does not start auditing, because an explicit AUDIT command must still be given on the object. But the parameter must be set to a value other than NONE (the default) for the AUDIT command to take effect. Being a non-dynamic parameter, the database must be bounced to change the value of AUDIT_TRAIL. To save the trouble and avoid an outage, always set the value to DB, even if you never intend to audit anything. It does not break anything, and you will always be ready to audit when the time comes.
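For example, a minimal sketch on an existing database (AUDIT_TRAIL is static, so the change takes effect only after the bounce; hr.jobs is just an illustrative object):

SQL> alter system set audit_trail=DB scope=spfile ;
SQL> shutdown immediate
SQL> startup
SQL> audit insert, update, delete on hr.jobs ;

Even with the parameter set, nothing is written to the audit trail until an explicit AUDIT command like the last one is issued.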

3.) Don't use .log : 
Don't use .log as the extension of redo logs. Someone may run a script to remove all the log files assuming they are redundant, and you will end up losing the online redo logs as well, forcing a database recovery. Instead, name them with the extension "redo" or "rdo."
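For example, a sketch of adding a new redo log group with the "rdo" extension (the group number, path and size are just for illustration):

SQL> alter database add logfile group 4 ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO04.RDO') size 50m ;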

4.) Preview RMAN Restore : 
Preview the RMAN restore to identify all the backup pieces that will be used in the recovery process, without doing an actual restore. This eliminates any surprises from missing pieces during an actual recovery.
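A minimal sketch from the RMAN prompt (the PREVIEW option is available from 10g onwards; SUMMARY condenses the listing):

RMAN> restore database preview ;
RMAN> restore database preview summary ;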

5.) Create a new Oracle user for clients running on the same server as the DB : 
The Oracle Database server software also contains the client piece, which allows clients to connect to the database on the same server. But as a best practice, do not use the same user or the same software; use a new one. For instance, if "oracle" is the user that installs the Oracle software, create a new user called, say, "oraapp" and install the client-only software using that user. The user "oraapp" should not be part of the dba or oinstall groups, so this user can't log on to the database as sysdba. Create a new group called "appgrp" and assign the user oraapp to this group. All the application users on the box should also be part of the appgrp group. This way they can use the sqlplus, sqlldr and other executables on the server, but not be able to connect as sysdba.
The common practice is to run the client software as the same user that owns the database software; but starting with 10.2, Oracle changed the security policy and took away the global execution permissions from the Oracle Home. So the only options are to let app users be part of the dba group or to change the permissions on the Oracle Home -- both make the database extremely vulnerable.
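A minimal sketch of that user setup on UNIX, run as root (the names match the text above):

# groupadd appgrp
# useradd -g appgrp oraapp

Then log in as "oraapp" and run the client-only installation from that account.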



Enjoy   :-) 


Wednesday, July 20, 2011

LogMiner utility in Oracle 10g

LogMiner is an Oracle utility. Using LogMiner, one can query the contents of online redo log files and archived log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The LogMiner tool can help the DBA find changed records in redo log files by using a set of PL/SQL procedures and functions.

LogMiner extracts all DDL and DML activity from the redo log files for viewing by a DBA via the dynamic performance view V$LOGMNR_CONTENTS. Internally, Oracle uses the LogMiner technology for several other features, such as Flashback Transaction Backout, Streams, and Logical Standby Databases. Most often, LogMiner is used for recovery purposes when the data to recover consists of just a few tables or a single code change.

LogMiner Configurations : There are three basic objects in a LogMiner configuration. They are as below:

1.) Source Database : The source database is the database that produces all the redo log files that we want LogMiner to analyze.

2.) LogMiner Dictionary : LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to us. For example, consider the following SQL statement:
SQL> insert into hr.jobs(job_id,job_title,min_salary,max_salary) values ('TT_WT','Technical Writer', 4000, 11000) ;
Without the dictionary, LogMiner will display:

SQL> insert   into   "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4")  values 
(HEXTORAW('45465f4748'), HEXTORAW('546563686e6963616c20577269746572'), 
HEXTORAW('c229'), HEXTORAW('c3020b')) ;

LogMiner provides three options for supplying the dictionary. The options are:

i.) Using the Online Catalog : Oracle recommends that we use this option when we will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option. To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as our dictionary source when we start LogMiner, as follows:

SQL> execute dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);

ii.) Extracting a LogMiner Dictionary to the Redo Log Files : Oracle recommends that we use this option when we do not expect to have access to the source database from which the redo log files were created, or if we anticipate that changes will be made to the column definitions in the tables of interest. To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode, and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed; therefore, the dictionary extracted to the redo log files is guaranteed to be consistent. To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL>execute dbms_logmnr_d.build(options=>dbms_logmnr_d.store_in_redo_logs) ; 

iii.) Extracting the LogMiner Dictionary to a Flat File : This option is maintained for backward compatibility with previous releases. It does not guarantee transactional consistency; Oracle recommends that we use either the online catalog or extract the dictionary from the redo log files instead. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD, specifying a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in c:\dictionary:

SQL> execute dbms_logmnr_d.build('dictionary.ora', 'c:\dictionary', dbms_logmnr_d.store_in_flat_file);

3.) Redo log files : The redo log files contain the changes made to the database or database dictionary.
  
Here I will demonstrate "Extracting the LogMiner dictionary to a flat file", which requires a little bit of setup.

Step 1  :  Make a folder say  "dict"
C:\>mkdir c:\dict

Step 2 : Add the parameter utl_file_dir to the initialization file : This specifies the location where the dictionary will be created.
SQL> alter system set utl_file_dir='C:\dict\'  scope=spfile ; 
SQL> shut immediate 
SQL> startup 
SQL> @ORACLE_HOME\RDBMS\ADMIN\utlfile.sql 

Normally, Oracle records only the change vectors in the redo log files, i.e. just the information required to reconstruct the operation at recovery time. If we want additional information in the redo log, then we need to enable supplemental logging prior to generating the log files that will be analyzed by LogMiner. Therefore, at the very least, we will enable minimal supplemental logging, as the following SQL statements show:

Step 3 :  Enable the supplemental  logging

SQL> select supplemental_log_data_min from v$database ;
SUPPLEME
--------
NO
Minimal supplemental logging is not enabled here, so we enable it:

SQL> alter database  add supplemental log data ; 
Database altered.

Step 4 : Create the dictionary file : 
The dbms_logmnr_d.build procedure requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, we must specify a directory for use by dbms_logmnr_d.build or the procedure will fail; this is why we set the utl_file_dir initialization parameter above. Now call dbms_logmnr_d.build, specifying a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file.


SQL> begin 
       dbms_logmnr_d.build 
       ( 
         dictionary_filename => 'dictionary.dic', 
         dictionary_location => 'C:\dict', 
         options             => dbms_logmnr_d.store_in_flat_file 
       );
     end;
     /
PL/SQL procedure successfully completed.

This has recorded the dictionary information into the file "C:\dict\dictionary.dic".


Step 5 : Add the list of redo log files to be analyzed :
To mine data in the redo log files, LogMiner needs to know which redo log files to mine. Use the dbms_logmnr.add_logfile procedure to create a list of redo log files before starting LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database. For example:

SQL> execute dbms_logmnr.add_logfile ( logfilename=> 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO03.LOG' ,  options=>dbms_logmnr.new) ;


To determine which redo log files are being analyzed in the current LogMiner session, we can query the v$logmnr_logs view, which contains one row for each redo log file. 
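For example, to append another log file to the same list and then check the list, a sketch (the addfile option is standard; the path mirrors the one above and is just for illustration):

SQL> execute dbms_logmnr.add_logfile ( logfilename=> 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO02.LOG' ,  options=>dbms_logmnr.addfile) ;
SQL> select log_id , filename from v$logmnr_logs ;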


Step 6 : Start LogMiner :
dbms_logmnr.start_logmnr starts a LogMiner session and populates the view v$logmnr_contents. v$logmnr_contents is only accessible to the session that started LogMiner, and only until dbms_logmnr.end_logmnr is called.


SQL> begin
       dbms_logmnr.start_logmnr 
       (    
        dictfilename => 'C:\dict\dictionary.dic',
        options      => dbms_logmnr.print_pretty_sql + 
                        dbms_logmnr.no_sql_delimiter +
                        dbms_logmnr.ddl_dict_tracking 
       );
     end;
     /
Many options can be provided to start_logmnr that affect the data representation in v$logmnr_contents, e.g.


I.)   dbms_logmnr.print_pretty_sql formats the SQL statements to enhance readability.
II.)  dbms_logmnr.no_sql_delimiter omits the ";" from the end of the SQL statements, which is useful when the SQL is meant to be re-executed in PL/SQL routines.
III.) dbms_logmnr.ddl_dict_tracking tracks the DDL statements in the log files.



Step 7 : Copy the contents of v$logmnr_contents to a user table : 
When the LogMiner session ends, v$logmnr_contents is no longer accessible. It is always better to copy the contents of v$logmnr_contents to a user table and then perform the analysis there, since it is quite expensive to query v$logmnr_contents; moreover, the user table can be indexed for better query performance (see the index sketch below). Let the table name be "myLogAnalysis".

SQL> drop table  myLogAnalysis ;
Table dropped.

SQL> create table myLogAnalysis  as select * from v$logmnr_contents ;
Table created.
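Because myLogAnalysis is now an ordinary table, it can be indexed for the analysis queries; a minimal sketch (the index name is arbitrary):

SQL> create index myloganalysis_scn_idx on myLogAnalysis(scn) ;
Index created.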

Step 8 : End the LogMiner session :
dbms_logmnr.end_logmnr() ends the LogMiner session, after which v$logmnr_contents is no longer accessible; but our user table myLogAnalysis, a copy of v$logmnr_contents, is still available.
SQL> begin
       DBMS_LOGMNR.END_LOGMNR();
     end;
     /
PL/SQL procedure successfully completed.

Step 9 : Query the user table (the copy of v$logmnr_contents) :
The output below shows the system change number (SCN) for each change, the segment on which the change was made, the SQL statement to redo the change, and the SQL statement to undo it.

SQL>select scn , seg_name , sql_redo , sql_undo from   myLogAnalysis  where username = 'SCOTT' AND (seg_owner is null OR seg_owner = 'SCOTT') ;
   SCN       SEG_NAME      SQL_REDO                          SQL_UNDO
--------    -----------        --------------------------------- ---------------------------------
639968      TEST_LOGMNR create table scott.test_logmnr
                   (id  number,
                    name varchar2(10)
                    )
640039             set transaction read write
640039 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
                    values                            where
                       "ID" = 1,                         "ID" = 1 and
                       "NAME" = 'TEST1'                  "NAME" = 'TEST1' and
                                                         ROWID = 'AAAM7vAAEAAAALcAAA'

640041 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
                    values                            where
                       "ID" = 2,                         "ID" = 2 and
                       "NAME" = 'TEST2'                  "NAME" = 'TEST2' and
                                                         ROWID = 'AAAM7vAAEAAAALcAAB'
640044             commit
640047             set transaction read write
640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"      update "SCOTT"."TEST_LOGMNR"
                     set                               set
                       "NAME" = 'TEST'                   "NAME" = 'TEST1'
                     where                             where
                       "NAME" = 'TEST1' and              "NAME" = 'TEST' and
                       ROWID = 'AAAM7vAAEAAAALcAAA'      ROWID = 'AAAM7vAAEAAAALcAAA'

640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"      update "SCOTT"."TEST_LOGMNR"
                     set                               set
                       "NAME" = 'TEST'                   "NAME" = 'TEST2'
                     where                             where
                       "NAME" = 'TEST2' and              "NAME" = 'TEST' and
                       ROWID = 'AAAM7vAAEAAAALcAAB'      ROWID = 'AAAM7vAAEAAAALcAAB'
640050             commit
640052             set transaction read write
640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR" insert into "SCOTT"."TEST_LOGMNR"
                    where                             values
                       "ID" = 1 and                      "ID" = 1,
                       "NAME" = 'TEST' and               "NAME" = 'TEST'
                       ROWID = 'AAAM7vAAEAAAALcAAA'
640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR" insert into "SCOTT"."TEST_LOGMNR"
                    where                             values
                       "ID" = 2 and                      "ID" = 2,
                       "NAME" = 'TEST' and               "NAME" = 'TEST'
                       ROWID = 'AAAM7vAAEAAAALcAAB'
640066             commit

Now we have more options to filter the data according to our requirements. Here are a few examples:

Filtering Data By Time  : 
To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure expects date values. Use the TO_DATE function to specify date and time, as in this example:

SQL> execute DBMS_LOGMNR.START_LOGMNR(DICTFILENAME =>'c:\dict\dictionary.ora', STARTTIME => TO_DATE('20-Jul-2011 04:30:00', 'DD-MON-YYYY HH:MI:SS'),ENDTIME => TO_DATE('20-Jul-2011  04:45:00', 'DD-MON-YYYY HH:MI:SS')); 

If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read from start to end for each SELECT statement issued.
The timestamps should not be used to infer the ordering of redo records; we can infer the order of redo records by using the SCN.
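For example, to read the changes from the copied table in the order they actually occurred:

SQL> select scn , timestamp , sql_redo from myLogAnalysis order by scn ;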

Filtering Data By SCN : 
To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters, as in this example:
SQL> execute  DBMS_LOGMNR.START_LOGMNR (dictfilename=>'c:\dict\dictionary.ora',  STARTSCN => 100, ENDSCN => 150);

The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters in situations where all are specified.
If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read from start to end for each SELECT statement issued.

Accessing LogMiner Information : 
LogMiner information is contained in the following views. We can use SQL to query them as we would any other view.
V$LOGMNR_CONTENTS  :  Shows changes made to user and table information.

V$LOGMNR_DICTIONARY : Shows information about the LogMiner dictionary file, provided the dictionary was created using the STORE_IN_FLAT_FILE option. The information shown includes the database name and status information.

V$LOGMNR_LOGS  :  Shows information about specified redo logs. There is one row for each redo log.

V$LOGMNR_PARAMETERS : Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.
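For example, while a LogMiner session is active, a quick sketch to check what was requested and which logs are in scope:

SQL> select * from v$logmnr_parameters ;
SQL> select log_id , filename from v$logmnr_logs ;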




Enjoy   :-) 


Saturday, July 16, 2011

ASMCMD Command Line Tool in Oracle 10g

The ASMCMD tool is a command line utility that allows us to manage ASM instances and the disk structures and files within those instances. With ASMCMD we  can:
List contents of ASM disk groups
Perform searches (like directory listings)
Add or remove directories
Display space availability and utilization

Many of the commands in ASMCMD are UNIX-like (e.g. ls, cd), and therefore ASMCMD is fairly easy to learn to use.

Starting ASMCMD
To start ASMCMD, set ORACLE_SID to +ASM (in the case of a single instance) and then type asmcmd at the command line, as below:

C:\>Set ORACLE_SID=+ASM
C:\>ASMCMD


Or from UNIX:
/opt/oracle>export ORACLE_SID=+ASM
/opt/oracle>asmcmd


When ASMCMD starts, we will see the ASMCMD prompt as seen here:
ASMCMD>


We can start ASMCMD with the -p option and it will display the current directory level in the prompt, as shown below:
C:\>ASMCMD -p
ASMCMD [+] >


ASMCMD Commands :    ASMCMD has a basic set of 12 commands in Oracle 10.2. We can see these commands by typing "help" at the ASMCMD prompt. The output is as below:
ASMCMD> help
        asmcmd [-p] [command]
The environment variables ORACLE_HOME and ORACLE_SID determine the instance to which the program connects, and ASMCMD establishes a bequeath connection to it, in the same manner as a SQLPLUS / AS SYSDBA.  The user must be a member of the SYSDBA group.


  Specifying the -p option allows the current directory to be displayed  in the command prompt, like so:
        ASMCMD [+DATAFILE/ORCL/CONTROLFILE] >
[command] specifies one of the following commands, along with its  parameters.
        Type "help [command]" to get help on a specific ASMCMD command.
        commands:
        --------
        cd
        du
        find
        help
        ls
        lsct
        lsdg
        mkalias
        mkdir
        pwd
        rm
        rmalias


Now we will check each command at the ASMCMD prompt. In my case, I have two disk groups, 'RATCAT' and 'SATMAT'.


1.) cd  –  Change directory : The cd command moves us in and out of ASM directory structures. This command works just like its UNIX equivalent; the special designator for the previous directory ("..") is available, and "+" indicates the root (instead of a forward or backward slash ( / or \ )). Here are some examples:


C:\>asmcmd -p
ASMCMD [+] > cd +RATCAT        
ASMCMD [+RATCAT] > cd test1
ASMCMD [+RATCAT/test1] > cd test22
ASMCMD [+RATCAT/test1/test22] > cd ..
ASMCMD [+RATCAT/test1] > cd ..
ASMCMD [+RATCAT] > cd ..
ASMCMD [+] >


2.) du  –  Disk Used :   Displays the total space used for files located recursively under [dir], similar to "du -s" under UNIX; the default is the current directory. Two values are returned, both in megabytes. The first value does not take into account the mirroring of the disk group, while the second does. For instance, if a file occupies 100 MB of space, then it actually takes up 200 MB on a normal redundancy disk group and 300 MB on a high redundancy disk group.


ASMCMD [+] > du
Used_MB        Mirror_used_MB
   2661                2661

3.)  find  –  Find a directory or file :    This command searches the specified directory and all subdirectories below it in the directory tree for the supplied name. The name can be a directory name or a filename, and can include wildcard characters; dir may also include wildcards. In the output of the command, directory names are suffixed with the slash character (/) to distinguish them from filenames.


We use the -t flag to find all the files of a particular type (specified as type). For example, we can search for control files by specifying type as CONTROLFILE. Valid values for type are the following:

CONTROLFILE
DATAFILE
ONLINELOG
ARCHIVELOG
TEMPFILE
BACKUPSET
PARAMETERFILE
DATAGUARDCONFIG
FLASHBACK
CHANGETRACKING
DUMPSET
AUTOBACKUP
XTRANSPORT


ASMCMD [+ratcat] > find  +  datafile*
+RATCAT/NOIDA/DATAFILE/


ASMCMD [+ratcat] > find  +  system*
+RATCAT/NOIDA/DATAFILE/SYSTEM.256.755871363
+RATCAT/NOIDA/DATAFILE/SYSTEM.263.756485911


ASMCMD [+] > find  /  system*
+RATCAT/NOIDA/DATAFILE/SYSTEM.256.755871363
+RATCAT/NOIDA/DATAFILE/SYSTEM.263.756485911

4.)  help :  The help command provides help for all the ASMCMD commands. We can enter help, followed by a command name, for more detailed information on the command we are interested in.
  
ASMCMD [+] > help du


5.)  ls  :  The ls command is the list file command. This will list the directories and/or files contained within the ASM disk structure. Here are some examples of using the ls command:


ASMCMD [+] > ls
RATCAT/
SATMAT/


ASMCMD [+] > ls -l
State    Type    Rebal  Unbal  Name
MOUNTED  EXTERN  N      N      RATCAT/
MOUNTED  EXTERN  N      N      SATMAT/

6.) lsct :  The lsct command lists all clients that are using ASM. We can specify a disk group to restrict the output, or we can allow the command to display all clients. The output displays the databases connected to the ASM instance and the version of the database connected. Also included are the compatible version of the database (set via the COMPATIBLE parameter setting) and the instance name of the instance connected to the ASM instance. Here is an example of the output from the lsct command (see also the filtered example after it):

ASMCMD [+] > lsct
DB_Name   Status        Software_Version  Compatible_version  Instance_Name
noida     CONNECTED           10.2.0.1.0          10.2.0.1.0           noida
noida     CONNECTED           10.2.0.1.0          10.2.0.1.0           noida
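We can also pass a disk group name to restrict the listing; a sketch using the group from this setup:

ASMCMD [+] > lsct RATCAT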


7.) lsdg : The lsdg command provides information on the disk groups that have been created in an ASM instance. This command returns a great deal of information about the disk group. Of particular interest is the free_MB column, which indicates how much space is available on the disk group. Here is an example of the output of the lsdg command:


ASMCMD [+] > lsdg
State    Type    Rebal  Unbal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  EXTERN  N      N         512   4096  1048576      9088     7043                0            7043              0  RATCAT/
MOUNTED  EXTERN  N      N         512   4096  1048576      9088     8362                0            8362              0  SATMAT/


8.) mkalias : Use the mkalias command to create ASM aliases for a given ASM file name. This command is equivalent to the alter diskgroup add alias command from the SQL prompt when connected to an ASM instance. Here is an example of using the mkalias to create an alias for an ASM file:


ASMCMD [+RATCAT/NOIDA/DATAFILE] > mkalias EXAMPLE.270.756485973 exam
ASMCMD [+RATCAT/NOIDA/DATAFILE] > ls
EXAMPLE.265.755871581
EXAMPLE.270.756485973
SYSAUX.257.755871363
SYSAUX.269.756485947
SYSTEM.256.755871363
SYSTEM.263.756485911
UNDOTBS1.258.755871365
UNDOTBS1.271.756485979
USERS.259.755871365
USERS.272.756485983
exam         ---------  alias
Hence we find the alias named "exam".
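The rough SQL equivalent, from a SQL*Plus session connected to the ASM instance as sysdba, would be:

SQL> alter diskgroup RATCAT add alias '+RATCAT/NOIDA/DATAFILE/exam' for '+RATCAT/NOIDA/DATAFILE/EXAMPLE.270.756485973' ;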


9.) mkdir :   mkdir allows us to create ASM directories. We cannot create a directory in the root directory (+), and we can create multiple directories at one time by simply listing them on the mkdir command line separated by spaces. This command is roughly equivalent to the alter diskgroup add directory command (see the SQL sketch after the examples below). Here are a couple of examples of using the mkdir command:


ASMCMD [+] > mkdir +RATCAT/asm


Here we can crosscheck it by using  "ls"  command


ASMCMD [+] > ls +RATCAT
NOIDA/
aa/
asm/           ----- new directory created
spfileTEST.ora
sysaux.f
test/
test1/


ASMCMD [+] > mkdir +RATCAT/asm1 +RATCAT/asm2 +RATCAT/asm3


Again we can crosscheck it by using "ls" command


ASMCMD [+] > ls +RATCAT/
NOIDA/
aa/
asm/
asm1/            ----->new directory created
asm2/            ----->
asm3/            ----->
spfileTEST.ora
sysaux.f
test/
test1/
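The rough SQL equivalent against the ASM instance (the directory name asm4 is just for illustration):

SQL> alter diskgroup RATCAT add directory '+RATCAT/asm4' ;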


10.)  pwd :   Use the pwd command to determine where in the ASM directory tree we  are. Here is an example:


ASMCMD [+RATCAT/NOIDA/DATAFILE] > pwd
+RATCAT/NOIDA/DATAFILE


In non-interactive mode, this is even more helpful:


ASMCMD> pwd
+RATCAT/NOIDA/DATAFILE


11.)  rm  :   The rm command is used to remove ASM directories or files. The rm command will only remove a directory if that directory is empty. This is equivalent to the alter diskgroup drop directory command. Here is an example of removing a directory with the rm command:


ASMCMD> ls
NOIDA/
aa/
asm/
asm1/
asm2/
asm3/
spfileTEST.ora
sysaux.f
test/
test1/


ASMCMD> rm asm


ASMCMD> ls
NOIDA/
aa/
asm1/
asm2/
asm3/
spfileTEST.ora
sysaux.f
test/
test1/


The rm command is also used to remove ASM files and associated aliases. This is equivalent to the alter diskgroup drop file command. 
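The rough SQL equivalents against the ASM instance (paths taken from the examples above):

SQL> alter diskgroup RATCAT drop directory '+RATCAT/asm1' ;
SQL> alter diskgroup RATCAT drop file '+RATCAT/NOIDA/DATAFILE/exam' ;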


We can use the "*" wildcard character with the rm command too. If we use the wildcard character, Oracle will warn us to make sure that we truly want to remove the files. Here is an example:


ASMCMD> rm asm*
You may delete multiple files and/or directories.
Are you sure? (y/n) y


ASMCMD> ls
NOIDA/
aa/
spfileTEST.ora
sysaux.f
test/
test1/

12.) rmalias  :  The rmalias command is used to remove ASM aliases (roughly equivalent to the alter diskgroup delete alias command; see the SQL sketch after the example below).


ASMCMD [+RATCAT/NOIDA/DATAFILE] > ls
EXAMPLE.265.755871581
EXAMPLE.270.756485973
SYSAUX.257.755871363
SYSAUX.269.756485947
SYSTEM.256.755871363
SYSTEM.263.756485911
UNDOTBS1.258.755871365
UNDOTBS1.271.756485979
USERS.259.755871365
USERS.272.756485983
exam


ASMCMD [+RATCAT/NOIDA/DATAFILE] > rmalias exam
ASMCMD [+RATCAT/NOIDA/DATAFILE] > ls
EXAMPLE.265.755871581
EXAMPLE.270.756485973
SYSAUX.257.755871363
SYSAUX.269.756485947
SYSTEM.256.755871363
SYSTEM.263.756485911
UNDOTBS1.258.755871365
UNDOTBS1.271.756485979
USERS.259.755871365
USERS.272.756485983
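The rough SQL equivalent of rmalias against the ASM instance:

SQL> alter diskgroup RATCAT delete alias '+RATCAT/NOIDA/DATAFILE/exam' ;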


Many more features and commands were added to ASMCMD in Oracle 11g. I will cover them later.


Enjoy   :-)