Thursday, June 16, 2011

Automatic Workload Repository (AWR) in Oracle


Oracle has provided many performance gathering and reporting tools over the years. Originally the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle8i introduced the Statspack functionality, which Oracle9i extended. In Oracle 10g, Statspack evolved into the Automatic Workload Repository (AWR). The AWR is a repository of performance information collected by the database to aid the DBA in the tuning process.

In Oracle 10g, AWR snapshots are collected automatically by the MMON background process, while a scheduled job, GATHER_STATS_JOB, gathers the optimizer statistics. The job is created and enabled automatically when we create a new Oracle database, and it can be disabled and enabled through dbms_scheduler.

We can disable the job using the dbms_scheduler.disable procedure as below :

Exec dbms_scheduler.disable('GATHER_STATS_JOB');

And we can enable the job using the dbms_scheduler.enable procedure as below :

Exec dbms_scheduler.enable('GATHER_STATS_JOB');
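
To confirm the job's current state, we can query the dba_scheduler_jobs view (a quick check; the view and its columns are standard):

SQL> SELECT job_name, enabled
       FROM dba_scheduler_jobs
      WHERE job_name = 'GATHER_STATS_JOB';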

AWR consists of a collection of performance statistics including :
  • Wait events used to identify performance problems.
  • Time model statistics indicating the amount of DB time associated with a process from the v$sess_time_model and v$sys_time_model views.
  • Active Session History (ASH) statistics from the v$active_session_history view.
  • Some system and session statistics from the v$sysstat and v$sesstat views.
  • Object usage statistics.
  • Resource intensive SQL and PL/SQL.

The resource intensive SQL and PL/SQL section of the report can be used to focus tuning efforts on those areas that will yield the greatest returns.  The statements are ordered by several criteria including :
  • SQL ordered by Elapsed Time
  • SQL ordered by CPU Time
  • SQL ordered by Gets
  • SQL ordered by Reads
  • SQL ordered by Executions
  • SQL ordered by Parse Calls
  • SQL ordered by Sharable Memory

Several of the automatic database tuning features require information from the AWR to function correctly, including:
  • Automatic Database Diagnostic Monitor
  • SQL Tuning Advisor
  • Undo Advisor
  • Segment Advisor

How to generate an AWR report ?

Oracle provides two scripts to generate the AWR report. The scripts are available in the $ORACLE_HOME\rdbms\admin directory. The two scripts are :

1.) awrrpt.sql   : If we have only one Oracle database (a single instance), run the awrrpt.sql script.

2.) awrrpti.sql  : If we have more than one Oracle instance (as in RAC), run the awrrpti.sql script so that we can specify the particular instance for which the AWR report should be created.

By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using the below procedure :

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,        -- Minutes (= 30 Days). Current value retained if NULL.
    interval  => 15);          -- Minutes. Current value retained if NULL.
END;
/
Here we have altered the snapshot interval to 15 minutes and the retention to 30 days (43200 minutes). A gap of 15 minutes between the two snapshots used for a report is generally enough to pin down a performance bottleneck.
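
We can also take a snapshot on demand and check the current settings; a minimal sketch using the standard DBMS_WORKLOAD_REPOSITORY package and the DBA_HIST views :

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

SQL> SELECT snap_interval, retention FROM dba_hist_wr_control;

SQL> SELECT snap_id, begin_interval_time FROM dba_hist_snapshot ORDER BY snap_id;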

AWR using Enterprise Manager    :   The automated workload repository administration tasks have been included in Enterprise Manager. The "Automatic Workload Repository" page is accessed from the main page by clicking on the "Administration" link, then the "Workload Repository" link under the "Workload" section. The page allows us to modify AWR settings or manage snapshots without using the PL/SQL APIs.

Here is a demo of generating the AWR report.

C:\>sqlplus sys/xxxx@orcl as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Jun 16 11:42:19 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @D:\app\Neerajs\product\11.2.0\dbhome_1\RDBMS\ADMIN\awrrpt.sql

Current Instance
~~~~~~~~~~~~~~~~
   DB Id       DB Name    Inst Num   Instance
-----------    ---------  --------   ---------
 1281052636    ORCL              1   orcl
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: HTML

Type Specified:  html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id        Inst Num   DB Name    Instance    Host
------------    --------   --------   ---------   ---------
* 1281052636           1   ORCL       orcl        xxxx

Using 1281052636 for database Id
Using          1 for instance number

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:       (Press Enter to see all the snapshots)

Listing all Completed Snapshots
                                                       
Instance   DB Name    Snap Id   Snap Started        Level
---------  --------   -------   -----------------   -----
orcl       ORCL             1   08 Jun 2011 11:30        1
                            3   08 Jun 2011 14:41        1
                            4   08 Jun 2011 15:30        1
.
Data is truncated
.
                          120   16 Jun 2011 05:30        1
                          121   16 Jun 2011 06:30        1
                          122   16 Jun 2011 07:30        1
                          123   16 Jun 2011 08:30        1
                          124   16 Jun 2011 09:30        1
                          125   16 Jun 2011 10:30        1
                          126   16 Jun 2011 11:30        1

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 125
Begin Snapshot Id specified: 125

Enter value for end_snap: 126
End   Snapshot Id specified: 126
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_125_126.html.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:          (Press enter if you want to use the above name)
Using the report name awrrpt_1_125_126.html
.
.
Report is truncated 
.
.
End of Report
</body></html>
Report written to awrrpt_1_125_126.html
SQL>

In the above output, the values shown at the prompts (the report type, the begin and end snapshot ids, and the report name) are the ones entered by the user.


Enjoy     :-)


Wednesday, June 15, 2011

SQL*Plus COPY Command


The SQL*Plus COPY command can copy data between two databases via SQL*Net.  The preferred method of doing this is to use SQL*Plus on the host where the database resides.  If the copy command is performed from a client SQL*Net connection, the data is transferred through the client machine.

The copy command copies data from one Oracle instance to another.   The data is simply copied directly from a source to a target.  The format of the copy command is:

SQL> COPY FROM <source database> TO <target database> {APPEND|CREATE|INSERT|REPLACE} destination_table (column_name, column_name...) USING query

The action can include:
  • Create   –  If the destination table already exists, copy reports an error; otherwise the table is created and the data is copied.
  • Replace  –  If the destination table exists, copy drops and recreates the table with the newly copied data; otherwise it creates the table and populates it with the data.
  • Insert   –  If the destination table exists, copy inserts the new rows into the table; otherwise copy reports an error and aborts.
  • Append   –  Inserts the data into the table if it exists, otherwise it creates the table and then inserts the data.

 SQL> copy from scott/tiger@test1  to scott/tiger@orcl create new_emp using select * from emp;

Here "test1" is remote database and "orcl" is target database where the table new_emp is created.

Once the command above is executed, the copy utility displays the values of three parameters, each of which can be set with the SQL*Plus set command.  The arraysize specifies the number of rows that SQL*Plus will retrieve from the database at one time.  The copy commit parameter specifies how often a commit is performed and is related to the number of trips – one trip is the number of rows defined in arraysize.  Finally, the long parameter displays the maximum number of characters copied for each column with a LONG datatype.

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
Table NEW_EMP created.

   14 rows selected from scott@test1
   14 rows inserted into NEW_EMP.
   14 rows committed into NEW_EMP at scott@orcl

SQL> desc new_emp;
  Name                 Null?       Type
 -------------------   ---------   -------------
 EMPNO                 NOT NULL    NUMBER(4)
 ENAME                             VARCHAR2(10)
 JOB                               VARCHAR2(9)
 MGR                               NUMBER(4)
 HIREDATE                          DATE
 SAL                               NUMBER(7,2)
 COMM                              NUMBER(7,2)
 DEPTNO                            NUMBER(2)

The command above did not specify column names for the new table (new_emp).  As a result, the new table will have the same column names as the table being copied.  If different column names are required, they can be specified after the table name:

create new_emp (col1, col2, …)

However, if one column name is specified, they all must be specified.

A DBA could perform this same function with a database link from one database pointing to another.  The appeal of the copy command is that it only requires SQL*Net service names and proper privileges to get the job done.  For those environments that restrict the usage of database links, the copy utility can be leveraged.  In addition, the copy command provides many options, as defined by the actions create, replace, insert and append.
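
For comparison, a rough sketch of the database link approach (the link name and credentials are only illustrative):

SQL> CREATE DATABASE LINK test1 CONNECT TO scott IDENTIFIED BY tiger USING 'test1';
SQL> CREATE TABLE new_emp AS SELECT * FROM emp@test1;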

If the copy command is executed from a client PC to copy data from remote database DB0 to remote database DB1, the data will be copied from DB0 to the client PC and then to DB1.  For this reason, it is best to use SQL*Plus from either remote host and not require the data to travel through a client machine in order to reach its final destination.
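
For a large copy, the arraysize and copycommit settings described above can be raised before running COPY; a small sketch (the values are illustrative and correspond to the output shown below, while long keeps its default of 80):

SQL> SET ARRAYSIZE 5000
SQL> SET COPYCOMMIT 5000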

The following command copied the table_with_one_million_rows table to new_table:

SQL> copy from scott/tiger@DB0  to scott/tiger@DB1 create new_table using select * from table_with_one_million_rows;

Array fetch/bind size is 5000. (arraysize is 5000)
Will commit after every 5000 array binds. (copycommit is 5000)
Maximum long size is 80. (long is 80)
Table NEW_TABLE created.

   1000000 rows selected from scott@DB0.
   1000000 rows inserted into NEW_TABLE.
   1000000 rows committed into NEW_TABLE at scott@DB1.

The copy command comes in handy when we have limited (SELECT-only) access on the source database, or we do not have OS-level access to the server hosting the source database.

  
Enjoy     :-)

  

New initialization parameters and Obsolete parameter in Oracle 11gR1


The best way to find the new parameters is to create a database link between the two versions and run a set query using MINUS; a sketch of such a query follows the list. Here are the new initialization parameters introduced in Oracle 11gR1 (11.1.0.6):

asm_preferred_read_failure_groups
client_result_cache_lag
client_result_cache_size
commit_logging
commit_wait
control_management_pack_access
db_lost_write_protect
db_securefile
db_ultra_safe
ddl_lock_timeout
diagnostic_dest
enable_ddl_logging
global_txn_processes
java_jit_enabled
ldap_directory_sysauth
memory_max_target
memory_target
optimizer_capture_sql_plan_baselines
optimizer_use_invisible_indexes
optimizer_use_pending_statistics
optimizer_use_sql_plan_baselines
parallel_io_cap_enabled
plscope_settings
redo_transport_user
resource_manager_cpu_allocation
result_cache_max_result
result_cache_max_size
result_cache_mode
result_cache_remote_expiration
sec_case_sensitive_logon
sec_max_failed_login_attempts
sec_protocol_error_further_action
sec_protocol_error_trace_action
sec_return_server_release_banner
xml_db_events
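
As noted above, a list like this can be generated with a MINUS query over a database link; a minimal sketch, assuming a link named db10g that points at the older release (swap the two operands to list parameters that have disappeared):

SELECT name FROM v$parameter
MINUS
SELECT name FROM v$parameter@db10g
ORDER BY 1;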

The list of obsolete parameters in Oracle 11gR1 (11.1.0.6) is :

_dlm_send_timeout
_fast_start_instance_recovery_target
_lm_rcv_buffer_size
_log_archive_buffer_size
ddl_wait_for_locks
logmnr_max_persistent_sessions
plsql_compiler_flags
remote_archive_enable


Enjoy    :-)


Tuesday, June 14, 2011

32-bit Oracle Database server is *not* certified on Windows x64

Once, while monitoring the alert logfile, I found the error below occurring every 10 seconds :
Tue Jun 07 16:19:43 2011
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
Tue Jun 07 16:19:53 2011
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
Tue Jun 07 16:20:03 2011

After googling, I found that Oracle Support states: "Running 32-bit applications on a 64-bit operating system is generally referred to as 'Compatibility Mode' by many AMD64/EM64T hardware vendors. Irrespective of this terminology, 32-bit Oracle Database server is *not* certified on Windows x64. For the 32-bit Oracle database client, check the certification matrix for more details."

Actually, the Oracle database software installs without an issue.  It was when I created the actual database that I encountered the repeated errors above in my alert.log after startup. A search on My Oracle Support (MetaLink) points to Doc ID 1060806.1, which states the cause and solution as below :

Cause  :     Installed 32-bit Oracle database software on a 64-bit MS Windows OS, which is not supported.

Note: For the Database software, you can ONLY install the x64 version on MS Windows (x64).  You can NOT install the 32-bit version of the Database software on MS Windows (x64).

Solution  :   Install 32-bit Oracle database software only on a 32-bit MS Windows OS.


Enjoy    :-)


Handling Corrupt Datafile Blocks in RMAN Backup

We have two different kinds of block corruption :
Physical corruption (media corrupt) : Physical corruption can be caused by defective memory boards, faulty controllers or broken sectors on a hard disk.
Logical corruption (soft corrupt) : Logical corruption can, among other reasons, be caused by an attempt to recover through a NOLOGGING action.

When RMAN encounters a corrupt datafile block during a backup, the behavior depends upon whether RMAN has encountered this corrupt block during a previous backup. If the block is already identified as corrupt, then it is included in the backup. If the block has not previously been identified as corrupt, then RMAN's default behavior is to stop the backup. We can override this behavior using the SET MAXCORRUPT command with BACKUP in a RUN block. Setting MAXCORRUPT allows a specified number of previously undetected block corruptions in datafiles during the execution of an RMAN BACKUP command. Here is the syntax of SET MAXCORRUPT, followed by examples.
Syntax  :  set maxcorrupt  for datafile <dataFileSpec>  TO  <integer>

Example :
i.)  RMAN> run {
               set maxcorrupt for datafile 3,4,5,6 to 1;
               backup check logical database;
             }
In the above example, datafiles 3, 4, 5 and 6 may each contain no more than one newly detected corrupt block, otherwise the backup will fail.
ii.) RMAN> run {
               set maxcorrupt for datafile 1 to 10;
               backup database
                 skip inaccessible
                 skip readonly;
             }

If RMAN detects more than this number of new corrupt blocks while taking the backup, then the backup job aborts, and no backup is created. As RMAN finds corrupt blocks during the backup process, it writes the corrupt blocks to the backup with a special header indicating that the block has media corruption. If the backup completes without exceeding the specified MAXCORRUPT limit, then the database records the address of the corrupt blocks and the type of corruption found (logical or physical) in the control file. We can access these records through the V$DATABASE_BLOCK_CORRUPTION view.
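
A quick way to review what has been recorded (the view and columns are standard; the contents will vary) :

SQL> SELECT file#, block#, blocks, corruption_type
       FROM v$database_block_corruption;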

Detecting Physical Block Corruption With RMAN BACKUP : RMAN checks only for physically corrupt blocks with every backup it takes and every image copy it makes. RMAN depends upon database server sessions to perform backups, and the database server can detect many types of physically corrupt blocks during the backup process. Each new corrupt block not previously encountered in a backup is recorded in the control file and in the alert.log. By default, error checking for physical corruption is enabled. At the end of a backup, RMAN stores the corruption information in the recovery catalog and control file.

How to detect block corruption ? 

1.)  DBVERIFY utility   :  DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. We use DBVERIFY primarily when we need to ensure that a backup database (or datafile) is valid before it is restored.
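
A minimal sketch of invoking it from the OS prompt (the datafile path and block size are illustrative) :

C:\> dbv file=D:\app\Neerajs\oradata\orcl\USERS01.DBF blocksize=8192
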
2.) Block checking parameters  : There are two initialization parameters for dealing with block corruption; a sketch of setting them follows the list :
  • DB_BLOCK_CHECKSUM (calculates a checksum for each block before it is written to disk, every time) causes 1-2% performance overhead.
  • DB_BLOCK_CHECKING (the server process checks blocks for internal consistency after every DML) causes 1-10% performance overhead.
For more about the db_block_checking parameter click here
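
A sketch of setting them dynamically (the values are only examples; in 10gR2/11g DB_BLOCK_CHECKSUM accepts OFF/TYPICAL/FULL and DB_BLOCK_CHECKING accepts OFF/LOW/MEDIUM/FULL) :

SQL> ALTER SYSTEM SET db_block_checksum = TYPICAL;
SQL> ALTER SYSTEM SET db_block_checking = MEDIUM;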

3.) ANALYZE TABLE  :  The ANALYZE TABLE tablename VALIDATE STRUCTURE CASCADE SQL statement validates the structure of an index or index partition, table or table partition, index-organized table, cluster, or object reference (REF). More about Analyze : Report Corruption click here
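
For example (SCOTT.EMP is used purely for illustration) :

SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;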

4.) RMAN BACKUP command with  VALIDATE option   :   We can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations, and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP... VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies.

RMAN can also verify that existing backups and archived logs needed for a restore are intact and usable :

RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;

To check for logical corruptions in addition to physical corruptions during the backup itself, run the following variation of the BACKUP command :

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

Detection of Logical Block Corruption :  Besides testing for media corruption, the database can also test data and index blocks for logical corruption, such as corruption of a row piece or index entry. If RMAN finds logical corruption, then it logs the block in the alert.log. If CHECK LOGICAL was used, the block is also logged in the server session trace file. By default, error checking for logical corruption is disabled.

1.) If RMAN finds any block corruption in the database, the following dynamic performance views are populated :
V$COPY_CORRUPTION
V$BACKUP_CORRUPTION
V$DATABASE_BLOCK_CORRUPTION

2.)  EXPORT/IMPORT command-line utility
A full database export (or an import with SHOW=Y) is another method; a corrupt block shows up as an error like this :
. about to export SCOTT's tables via Conventional Path ...
. . exporting table BONUS
EXP-00056: ORACLE error 1578 encountered
ORA-01578: ORACLE data block corrupted (file # 4, block # 43)
ORA-01110: data file 4: 'D:\app\Neerajs\oradata\orcl\USERS01.DBF'

3.) DBMS_REPAIR package
DBMS_REPAIR is a utility that can detect and repair block corruption within Oracle. It is provided by Oracle as part of the standard database installation. For details about the DBMS_REPAIR package click here .
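
A minimal sketch of using it to count corrupt blocks in a table (SCOTT.EMP and the repair table name are illustrative; the repair table only needs to be created once) :

BEGIN
  -- Create the repair table that will hold the findings
  DBMS_REPAIR.admin_tables(
    table_name => 'REPAIR_TABLE',
    table_type => DBMS_REPAIR.repair_table,
    action     => DBMS_REPAIR.create_action);
END;
/

SET SERVEROUTPUT ON
DECLARE
  l_corrupt_count PLS_INTEGER;
BEGIN
  -- Check SCOTT.EMP and report how many corrupt blocks were found
  DBMS_REPAIR.check_object(
    schema_name       => 'SCOTT',
    object_name       => 'EMP',
    repair_table_name => 'REPAIR_TABLE',
    corrupt_count     => l_corrupt_count);
  DBMS_OUTPUT.put_line('Corrupt blocks found: ' || l_corrupt_count);
END;
/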


Enjoy     :-)