Wednesday, June 15, 2011

SQL*Plus COPY Command


The SQL*Plus COPY command can copy data between two databases via SQL*Net. The preferred method is to run SQL*Plus on the host where the database resides; if the COPY command is performed from a client SQL*Net connection, the data is transferred through the client machine.

The COPY command copies data from one Oracle instance to another. The data is simply copied directly from a source to a target. The format of the COPY command is:

SQL> COPY FROM <source database> TO <target database> {APPEND|CREATE|INSERT|REPLACE} destination_table [(column_name, column_name, ...)] USING query

The action can include:
  • CREATE   –  If the destination table already exists, COPY reports an error; otherwise the table is created and the data is copied.
  • REPLACE  –  If the destination table exists, COPY drops and recreates the table with the newly copied data; otherwise it creates the table and populates it with the data.
  • INSERT   –  If the destination table exists, COPY inserts the new rows into the table; otherwise COPY reports an error and aborts.
  • APPEND   –  Inserts the data into the table if it exists; otherwise creates the table and then inserts the data.

 SQL> copy from scott/tiger@test1  to scott/tiger@orcl create new_emp using select * from emp;

Here "test1" is the source database and "orcl" is the target database, where the table new_emp is created.

Once the command above is executed, the copy utility displays the values of three parameters, each of which can be set with the SQL*Plus SET command. The arraysize parameter specifies the number of rows that SQL*Plus retrieves from the database at one time. The copycommit parameter specifies how often a commit is performed, expressed in trips – one trip is the number of rows defined by arraysize. Finally, the long parameter specifies the maximum number of characters copied for each column with a LONG datatype.

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
Table NEW_EMP created.

   14 rows selected from scott@test1
   14 rows inserted into NEW_EMP.
   14 rows committed into NEW_EMP at scott@orcl
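
All three of these settings can be tuned from SQL*Plus before running COPY. A minimal sketch, with illustrative values rather than recommendations:

SQL> set arraysize 1000
SQL> set copycommit 1
SQL> set long 5000

With arraysize 1000 and copycommit 1, SQL*Plus fetches 1000 rows per trip and commits after every trip, which keeps the undo requirement on the target small when copying large tables.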

SQL> desc new_emp
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)

The command above did not specify column names for the new table (new_emp).  As a result, the new table will have the same column names as the table being copied.  If different column names are required, they can be specified after the table name:

create new_emp (col1, col2, …)

However, if one column name is specified, they all must be specified.
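
For example, here is a sketch that renames the copied columns (the new table and column names are purely illustrative). Note that COPY is a SQL*Plus command, so a line is continued with a trailing hyphen:

SQL> copy from scott/tiger@test1 to scott/tiger@orcl -
> create new_emp2 (emp_id, emp_name, emp_salary) -
> using select empno, ename, sal from emp;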

A DBA could perform this same function with a database link from one database pointing to another.  The appeal of the copy command is that it only requires SQL*Net service names and proper privileges to get the job done.  For those environments that restrict the usage of database links, the copy utility can be leveraged.  In addition, the copy command provides many options, as defined by the actions create, replace, insert and append.
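
For comparison, the equivalent operation with a database link might look like the following sketch (the link name here is an assumption):

SQL> create database link test1_link connect to scott identified by tiger using 'test1';
SQL> create table new_emp as select * from emp@test1_link;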

If the copy command is executed from a client PC to copy data from remote database DB0 to remote database DB1, the data will be copied from DB0 to the client PC and then to DB1.  For this reason, it is best to use SQL*Plus from either remote host and not require the data to travel through a client machine in order to reach its final destination.

The following command copied the table_with_one_million_rows table to new_table:

SQL> copy from scott/tiger@DB0  to scott/tiger@DB1 create new_table using select * from table_with_one_million_rows;

Array fetch/bind size is 5000. (arraysize is 5000)
Will commit after every 5000 array binds. (copycommit is 5000)
Maximum long size is 80. (long is 80)
Table NEW_TABLE created.

   1000000 rows selected from scott@DB0.
   1000000 rows inserted into NEW_TABLE.
   1000000 rows committed into NEW_TABLE at scott@DB1.

The COPY command comes in handy when we have only limited (SELECT) access on the source database, or when we do not have OS-level access to the server hosting the source database.

  
Enjoy     :-)

  

New initialization parameters and Obsolete parameter in Oracle 11gR1


The best way to find new parameters is to create a database link between the two versions and run a set query using MINUS. Here are the new initialization parameters introduced in Oracle 11gR1 (11.1.0.6):
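
As a sketch, assuming the 11g instance has a database link named db10g pointing at the older release, the comparison could be run as:

SQL> select name from v$parameter
  2  minus
  3  select name from v$parameter@db10g;

Note that V$PARAMETER shows only the documented parameters; to pick up hidden (underscore) parameters as well, the same MINUS can be run against X$KSPPI as SYS.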

asm_preferred_read_failure_groups
client_result_cache_lag
client_result_cache_size
commit_logging
commit_wait
control_management_pack_access
db_lost_write_protect
db_securefile
db_ultra_safe
ddl_lock_timeout
diagnostic_dest
enable_ddl_logging
global_txn_processes
java_jit_enabled
ldap_directory_sysauth
memory_max_target
memory_target
optimizer_capture_sql_plan_baselines
optimizer_use_invisible_indexes
optimizer_use_pending_statistics
optimizer_use_sql_plan_baselines
parallel_io_cap_enabled
plscope_settings
redo_transport_user
resource_manager_cpu_allocation
result_cache_max_result
result_cache_max_size
result_cache_mode
result_cache_remote_expiration
sec_case_sensitive_logon
sec_max_failed_login_attempts
sec_protocol_error_further_action
sec_protocol_error_trace_action
sec_return_server_release_banner
xml_db_events

The list of parameters made obsolete in Oracle 11gR1 (11.1.0.6) is:

_dlm_send_timeout
_fast_start_instance_recovery_target
_lm_rcv_buffer_size
_log_archive_buffer_size
ddl_wait_for_locks
logmnr_max_persistent_sessions
plsql_compiler_flags
remote_archive_enable


Enjoy    :-)


Tuesday, June 14, 2011

32-bit Oracle Database server is *not* certified on Windows x64

Once, while monitoring an alert logfile, I found that the error below was occurring every 10 seconds:
Tue Jun 07 16:19:43 2011
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
Tue Jun 07 16:19:53 2011
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
OER 7451 in Load Indicator : Error Code = OSD-04500: illegal option specified
O/S-Error: (OS 1) Incorrect function. !
Tue Jun 07 16:20:03 2011

After googling, I found that Oracle Support states: “Running 32-bit applications on a 64-bit operating system is generally referred to as “Compatibility-Mode” by many AMD64/EM64T hardware vendors. Irrespective of this terminology, 32-bit Oracle Database server is *not* certified on Windows x64. For the 32-bit Oracle database client, check the certification matrix for more details.”

Actually, the Oracle database software installs without an issue. It was when I created the actual database that I encountered the same repeated errors in my alert.log after startup. A search on My Oracle Support (Metalink) points to Doc ID 1060806.1, which states the cause and solution as below:

Cause  :     Installed 32-bit Oracle database software on a 64-bit MS Windows OS which is not supported.

Note: For the Database software, you can ONLY install the x64 version on MS Windows (x64).  You can NOT install the 32-bit version Database software on MS Windows (x64).

Solution  :   Install 32-bit Oracle database software only on 32-bit MS Windows OS.


Enjoy    :-)


Handling Corrupt Datafile Blocks in RMAN Backup

We have two different kinds of block corruption:
Physical corruption (media corrupt) : Physical corruption can be caused by defective memory boards, controllers or broken sectors on a hard disk.
Logical corruption (soft corrupt) : Logical corruption can, among other reasons, be caused by an attempt to recover through a NOLOGGING action.

When RMAN encounters a corrupt datafile block during a backup, the behavior depends upon whether RMAN has encountered this corrupt block during a previous backup. If the block is already identified as corrupt, then it is included in the backup. If the block has not previously been identified as corrupt, then RMAN's default behavior is to stop the backup. We can override this behavior using the SET MAXCORRUPT command with BACKUP in a RUN block. Setting MAXCORRUPT allows a specified number of previously undetected block corruptions in datafiles during the execution of an RMAN BACKUP command. Here are the syntax and examples of SET MAXCORRUPT.
Syntax  :  SET MAXCORRUPT FOR DATAFILE <dataFileSpec> TO <integer>

Example :
i.)  RMAN> run {
               set maxcorrupt for datafile 3,4,5,6 to 1;
               backup check logical database;
           }
In the above example, each of datafiles 3, 4, 5 and 6 may contain no more than 1 previously undetected corrupt block; otherwise the backup will fail.
ii.) RMAN> run {
               set maxcorrupt for datafile 1 to 10;
               backup database
                 skip inaccessible
                 skip readonly;
           }

If RMAN detects more than this number of new corrupt blocks while taking the backup, then the backup job aborts, and no backup is created. As RMAN finds corrupt blocks during the backup process, it writes the corrupt blocks to the backup with a special header indicating that the block has media corruption. If the backup completes without exceeding the specified MAXCORRUPT limit, then the database records the address of the corrupt blocks and the type of corruption found (logical or physical) in the control file. We can access these records through the V$DATABASE_BLOCK_CORRUPTION view.
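
For example, after a backup the recorded corruptions can be listed with:

SQL> select file#, block#, blocks, corruption_type
  2  from   v$database_block_corruption;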

Detecting Physical Block Corruption With RMAN BACKUP : RMAN checks only for physically corrupt blocks with every backup it takes and every image copy it makes. RMAN depends upon database server sessions to perform backups, and the database server can detect many types of physically corrupt blocks during the backup process. Each new corrupt block not previously encountered in a backup is recorded in the control file and in the alert.log. By default, error checking for physical corruption is enabled. At the end of a backup, RMAN stores the corruption information in the recovery catalog and control file.

How to detect block corruption ? 

1.)  DBVERIFY utility   :  DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. We use DBVERIFY primarily when we need to ensure that a backup database (or datafile) is valid before it is restored.
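
A minimal sketch from the OS prompt (the file path and block size here are illustrative; the blocksize must match the database block size):

C:\> dbv file=D:\app\Neerajs\oradata\orcl\USERS01.DBF blocksize=8192

DBVERIFY then reports totals such as pages examined, pages processed and pages marked corrupt.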
2.) Block checking parameters  : There are two initialization parameters for dealing with block corruption :                            
  • DB_BLOCK_CHECKSUM (calculates a checksum for each block before it is written to disk, every time) causes 1-2% performance overhead.
  • DB_BLOCK_CHECKING (the server process checks each block for internal consistency after every DML) causes 1-10% performance overhead.
For more about db_block_checking parameter click here
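
Both parameters can be changed dynamically. A minimal sketch (the values chosen are illustrative; in 11g DB_BLOCK_CHECKSUM accepts OFF/TYPICAL/FULL and DB_BLOCK_CHECKING accepts OFF/LOW/MEDIUM/FULL):

SQL> alter system set db_block_checksum = typical scope=both;
SQL> alter system set db_block_checking = medium scope=both;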

3.) ANALYZE TABLE  :  The ANALYZE TABLE tablename VALIDATE STRUCTURE CASCADE SQL statement validates the structure of an index or index partition, table or table partition, index-organized table, cluster, or object reference (REF). More about Analyze:Report Corruption click here

4.) RMAN BACKUP command with  VALIDATE option   :   We can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations, and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP... VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies.

RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;

To check for logical corruptions in addition to physical corruptions, run the following variation of the preceding command:
RMAN> BACKUP VALIDATE 
              CHECK LOGICAL  DATABASE 
              ARCHIVELOG ALL;

Detection of Logical Block Corruption :  Besides testing for media corruption, the database can also test data and index blocks for logical corruption, such as corruption of a row piece or index entry. If RMAN finds logical corruption, then it logs the block in the alert.log. If CHECK LOGICAL was used, the block is also logged in the server session trace file. By default, error checking for logical corruption is disabled.

1.) If RMAN finds any block corruption in the database, the following dynamic performance views are populated:
V$COPY_CORRUPTION
V$BACKUP_CORRUPTION
V$DATABASE_BLOCK_CORRUPTION

2.)  EXPORT/IMPORT command-line utility
A full database EXPORT (or an IMPORT with show=y) is another method; exp reads every table block and fails on corruption, as below:
. about to export SCOTT's tables via Conventional Path ...
. . exporting table BONUS
EXP-00056: ORACLE error 1578 encountered
ORA-01578: ORACLE data block corrupted (file # 4, block # 43)
ORA-01110: data file 4: 'D:\app\Neerajs\oradata\orcl\USERS01.DBF'
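
The output above corresponds to an export run along these lines (file name and credentials are illustrative):

C:\> exp system/manager full=y file=full.dmp log=full.log

On import, show=y only lists the contents of the dump file without actually loading anything:

C:\> imp system/manager full=y file=full.dmp show=y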

3.) DBMS_REPAIR package
DBMS_REPAIR is a utility that can detect and repair block corruption within Oracle. It is provided by Oracle as part of the standard database installation. For detail about the DBMS_REPAIR package click here .
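
A minimal sketch of checking a single table with DBMS_REPAIR (the schema and table names are illustrative, and the repair table must first be created with DBMS_REPAIR.ADMIN_TABLES):

SQL> set serveroutput on
SQL> declare
  2     v_corrupt_count  number;
  3  begin
  4     dbms_repair.check_object (
  5        schema_name       => 'SCOTT',
  6        object_name       => 'BONUS',
  7        repair_table_name => 'REPAIR_TABLE',
  8        corrupt_count     => v_corrupt_count);
  9     dbms_output.put_line ('Corrupt blocks found : ' || v_corrupt_count);
 10  end;
 11  /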


Enjoy     :-)


Friday, June 10, 2011

Rman Change Command in Oracle


RMAN CHANGE Command : The CHANGE command updates the status of a backup or copy in the RMAN repository. Its purpose is to change the status of backups, copies, and archived logs in the repository to AVAILABLE or UNAVAILABLE. This feature is useful when a previously unavailable file is made available again, or when we do not want a specific backup or copy to be eligible for restore but also do not want to delete it.

The CHANGE command can alter the repository status of usable backups and copies from prior incarnations. With the UNCATALOG option it removes the catalog records for backups and copies, and updates the corresponding records in the target control file to status DELETED. This feature is useful when we remove a file by using an operating system command rather than the RMAN DELETE command, and want to remove its repository record as well.

Restrictions and Usage Notes :
  • The target instance must be started.
  • The KEEP FOREVER clause requires use of a recovery catalog.
  • We cannot use CHANGE ... UNAVAILABLE or KEEP attributes for files stored in the flash recovery area.

The only CHANGE command that requires either a manual or automatic maintenance channel is CHANGE ... AVAILABLE. However, a maintenance channel is not required when CHANGE ... AVAILABLE is used with a file that is disk-only (that is, an archived log, datafile copy, or control file copy). Here are the options used with the CHANGE command:

AVAILABLE  :  Changes the status of a backup or copy to AVAILABLE in the repository. View the status in the LIST output or recovery catalog views.

keepOption   :  Changes the exemption status of a backup or copy in relation to the configured retention policy. For example, specify CHANGE ... NOKEEP to make a backup that is currently exempt from the retention policy eligible for OBSOLETE status.
Note: we cannot use this option with flash recovery area files.

UNAVAILABLE  :   Changes the status of a backup or copy to UNAVAILABLE in the repository. View the status in the LIST output or recovery catalog views. This option is provided for cases when the file cannot be found or has migrated offsite. RMAN does not use a file that is marked UNAVAILABLE in a RESTORE or RECOVER command. If the file is later found or returns to the main site, then use the AVAILABLE option to update its status.

UNCATALOG  :    Removes references to a datafile copy, backup piece, or archived redo log from the recovery catalog, and updates records in the target control file to status DELETED. The CHANGE ... UNCATALOG command does not touch physical backups and copies. Use this command to notify RMAN when a file is deleted by some means other than a DELETE command.
Caution: If we resynchronize from a backup control file, or upgrade the recovery catalog, then uncataloged records can sometimes reappear in the catalog metadata.

Here are few Examples : 

1.) UNAVAILABLE option   :  This example changes the status of backup set 100 as well as all backups of server parameter files created more than a day ago to UNAVAILABLE:

RMAN > CHANGE BACKUPSET 100 UNAVAILABLE;
RMAN > CHANGE BACKUP OF SPFILE COMPLETED BEFORE 'SYSDATE-1' UNAVAILABLE;

2.) Uncataloging and Cataloging Archived Logs  :  In this example, we move all archived logs to a new directory, uncatalog them, and then recatalog them in the new location:

RMAN > HOST 'move D:\archive\*.* D:\stand_arch\';
RMAN > CHANGE ARCHIVELOG ALL UNCATALOG;
RMAN > CATALOG START WITH 'D:\stand_arch\';

3.) Changing the Retention Status of a Backupset : This example, which requires a recovery catalog, changes an ordinary backup into a long-term backup:

RMAN > CHANGE BACKUP TAG 'consistent_db_bkup' KEEP FOREVER NOLOGS;


Enjoy      :-)