Saturday, April 30, 2011

Format for LOG_ARCHIVE_FORMAT in Oracle


The LOG_ARCHIVE_FORMAT parameter controls the format of the archived redo log file name. This parameter is only relevant when the database is running in ARCHIVELOG mode. LOG_ARCHIVE_FORMAT is a static parameter, so it takes effect only after the instance is restarted. If the format defined in LOG_ARCHIVE_FORMAT is invalid, the database will start up, but the archiver will fail to archive logs, which will cause the database to hang, and the following message will be reported in the alert log:
"ORA-00294: invalid archivelog format specifier"
So if we change this parameter, a quick test can be done by forcing a log switch (as in the demo below) to make sure Oracle is able to archive the redo log.
The format for specifying the archived redo log file name is given below:

LOG_ARCHIVE_FORMAT = "LOG%s_%t_%r.arc"
Various format specifiers that can be used with the LOG_ARCHIVE_FORMAT parameter are given below:
%s - log sequence number
%S - log sequence number, zero-padded
%t - thread number
%T - thread number, zero-padded
%a - activation ID
%d - database ID
%r - resetlogs ID

Whenever uppercase is used for a variable, such as %S or %T, it forces the value of the variable to be of fixed length, and the value to the left is padded with zeros.

Below is a demo of the LOG_ARCHIVE_FORMAT parameter.

SQL> alter system set log_archive_dest_1='location=D:\archive\';
System altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\archive\
Oldest online log sequence     11
Next log sequence to archive   13
Current log sequence           13
        
SQL> alter system set log_archive_format='noida_%s_%t_%r.arc' scope=spfile;
System altered.

SQL> shutdown immediate
SQL> startup

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\archive\
Oldest online log sequence     11
Next log sequence to archive   13
Current log sequence           13

SQL> alter system switch logfile;
System altered.

Now the new archive log files are named like 'NOIDA_13_1_749730106.ARC'.
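To confirm that the new format has taken effect, we can also query v$archived_log (a quick check; the row shown is the archive produced by the log switch above):

SQL> select sequence#, name from v$archived_log order by sequence#;

 SEQUENCE# NAME
---------- -------------------------------------
        13 D:\ARCHIVE\NOIDA_13_1_749730106.ARC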



Enjoy       :-)

Difference Between LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1

Once a friend called me and said he was confused about LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_n. I explained it to him and decided to post it here.

LOG_ARCHIVE_DEST is applicable only if we are running the database in ARCHIVELOG mode. The LOG_ARCHIVE_DEST parameter is used to specify the archiving location, and the location it specifies must be local. With it we can archive to at most two local locations: a primary and a secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).

The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten destinations (n = 1, 2, ... 10) in Oracle 10g and up to thirty-one destinations (n = 1, 2, ... 31) in Oracle 11g, each of which must specify either the LOCATION or the SERVICE attribute to indicate where to archive the redo data. All other attributes are optional. We set the attributes of the LOG_ARCHIVE_DEST_n initialization parameter to control different aspects of how redo transport services transfer redo data from a production or primary database destination to another (standby) database destination. For every LOG_ARCHIVE_DEST_n initialization parameter that we define, there is a corresponding LOG_ARCHIVE_DEST_STATE_n parameter (with the same range of n), which specifies whether the corresponding destination is currently enabled or disabled.

LOG_ARCHIVE_DEST is incompatible with the LOG_ARCHIVE_DEST_n parameters, and must be set to the null string ('') when any LOG_ARCHIVE_DEST_n parameter has a value other than a null string.

If we are using Oracle Enterprise Edition, the LOG_ARCHIVE_DEST parameter is deprecated in favor of the LOG_ARCHIVE_DEST_n parameters. If Oracle Enterprise Edition is not installed, or it is installed but we have not specified any LOG_ARCHIVE_DEST_n parameters, then the LOG_ARCHIVE_DEST parameter is valid. If we have LOG_ARCHIVE_DEST set and need more than two archive locations, follow the steps below:

SQL> alter system reset log_archive_dest scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup
SQL> alter system set log_archive_dest_1='location=C:\archive\' ;
SQL> alter system set log_archive_dest_2='location=D:\oracle\archive\' ;
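To confirm that both destinations are in use, we can query v$archive_dest (a quick check; output trimmed to the relevant columns):

SQL> select dest_name, status, destination
  2  from v$archive_dest
  3  where destination is not null;

DEST_NAME            STATUS    DESTINATION
-------------------- --------- -------------------
LOG_ARCHIVE_DEST_1   VALID     C:\archive\
LOG_ARCHIVE_DEST_2   VALID     D:\oracle\archive\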



Enjoy      :-)


Control File Contents

A control file contains information about the associated database that is required for access by an instance, both at startup and during normal operation. Control file information can be modified only by Oracle Database; no database administrator or user can edit a control file. It contains (but is not limited to) the following types of information :
  • Database information (RESETLOGS SCN and its time stamp)
  • Archive log history
  • Tablespace and datafile records (file names, datafile checkpoints, read/write status, offline or not)
  • Redo threads (current online redo log)
  • Database creation date
  • Database name
  • Current archivelog mode
  • Log records (sequence numbers, SCN range in each log)
  • RMAN catalog
  • Database block corruption information
  • Database ID, which is unique to each database
The location of the control files is specified through the CONTROL_FILES initialization parameter.
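To see where the control files of the current instance live, we can query v$controlfile or simply show the parameter (a quick check; the path below is only an example):

SQL> select name from v$controlfile;

NAME
--------------------------------------------
D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL

SQL> show parameter control_files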


Size of control files :
The size of the control files is governed by the following parameters
  • maxlogfiles
  • maxlogmembers
  • maxloghistory
  • maxinstances
  • control_file_record_keep_time
Sections :
The controlfile contains the following sections:
Archive Log (reusable)
Backup Corruption (reusable)
Backup Datafile (reusable)
Backup Piece (reusable)
Backup Redolog (reusable)
Backup Set (reusable)
Backup SPFILE
CKPT Process
Copy Corruption (reusable)
Datafile
Datafile Copy (reusable)
Datafile History
Database Incarnation
Deleted Objects (reusable)
Filename
Flashback Log
Instance Space Reservation
Log History (reusable)
MTTR
Offline Range (reusable)
Recovery Destination
Removable Recovery Files
Rman Status
Rman Configuration
Redo Threads
Redo Logs
Tablespace
Temporary Filename
Thread Instance Name Mapping
Proxy Copy

The minimum number of days that a reusable record is kept in the control file is controlled by the CONTROL_FILE_RECORD_KEEP_TIME parameter. These sections consist of records; the record size, total number of records, and number of used records in each section are exposed through v$controlfile_record_section.
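For example, to see how the reusable sections are sized and used (output abbreviated; any of the section names listed above can be filtered on):

SQL> select type, record_size, records_total, records_used
  2  from v$controlfile_record_section
  3  where type like 'BACKUP%';

The RECORDS_TOTAL and RECORDS_USED columns show how many slots each section has and how many are currently in use.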

To check the information inside the control file, use the command below:
SQL> alter database backup controlfile to trace as 'C:\CREATE_CONTROL.sql';




Enjoy      :-)

Friday, April 29, 2011

What is RBA ?

An RBA (Redo Block Address) points to a specific physical location within a redo log file. The "tail of the log" is the RBA of the most recent redo entry written to the redo log file. An RBA is ten bytes long and has three components:

  • the log file sequence number (4 bytes)
  • the log file block number (4 bytes)
  • the byte offset into the block at which the redo record starts (2 bytes)

For example, the RBA [0x775.2.10] maps to log sequence 0x775, block number 2, byte offset 10.

There are different types of RBA tracked in the SGA; the main ones are the following:

Low RBA : The address of the first redo change made to a dirty buffer. We can check the low RBA in x$bh.

High RBA : The address of the last (most recent) redo change made to a dirty buffer. We can check the high RBA in x$bh.

Checkpoint RBA : The point up to which DBWR has written buffers from the checkpoint queue when incremental checkpointing is enabled. This RBA is copied into the control file's checkpoint progress record. When instance recovery occurs, it starts from the checkpoint RBA recorded in the control file. We can check this RBA in x$targetrba (sometimes in x$kccrt).

On-disk RBA : The RBA up to which redo has been flushed to the online redo log file on disk. This RBA is recorded in the control file record section. We can check the on-disk RBA in x$kcccp (sometimes in x$targetrba).

How the RBA comes into the picture :
CKPT records checkpoint information to the control file to maintain bookkeeping information such as checkpoint progress. Each instance checkpoint refers to some RBA (the checkpoint RBA) such that all redo prior to that RBA has been written to disk. Hence recovery time is the difference between the checkpoint RBA and the end of the redo log.

Given a checkpoint RBA, DBWR writes buffers from the head of the checkpoint queue until the low RBA of the buffer at the head of the queue is greater than the checkpoint RBA. At this point, CKPT can record this checkpoint progress record in the control file (phase 3).
Phase 1 : the process initiating the checkpoint marks the checkpoint RBA (the current RBA, i.e. the RBA of the last change made to any buffer) at the time the request is initiated.
Phase 2 : DBWR writes all required buffers, i.e. all buffers that have been modified at RBAs less than or equal to the checkpoint RBA.
Phase 3 : after all required buffers have been written, the CKPT process records the completion of the checkpoint in the control file.

The checkpoint RBA is copied into the checkpoint progress record of the control file by the checkpoint heartbeat once every 3 seconds. Instance recovery, when needed, begins from the checkpoint RBA recorded in the control file. The target RBA is the point up to which DBWn should seek to advance the checkpoint RBA to satisfy instance recovery objectives.

The term sync RBA is sometimes used to refer to the point up to which LGWR is required to sync the thread. However, this is not a full RBA -- only a redo block number is used at this point.
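Although the x$ tables above are internal, the distance between the checkpoint RBA and the tail of the log can be watched through the supported v$instance_recovery view, which expresses the gap in redo blocks (a quick illustration; the values will differ on every system):

SQL> select actual_redo_blks, target_redo_blks, estimated_mttr
  2  from v$instance_recovery;

Here actual_redo_blks is the redo (in blocks) between the checkpoint RBA and the tail of the log, and target_redo_blks is the target that DBWn tries to stay under.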




What are Codd's Rules ?

Dr. E.F. Codd, an IBM researcher, first developed the relational data model in 1970. Over time it has proved to be flexible, extensible, and robust. In 1985 Codd published a list of 12 rules known as "Codd's 12 Rules" that define how a true RDBMS should be evaluated. Understanding these rules greatly improves the ability to understand RDBMSs in general, including Oracle. Also note that these rules are "guidelines" because, to date, no commercial relational database system fully conforms to all 12 rules.


It is important to remember that the relational model is not a product; it is a method of design. Database products like the Oracle Database, SQL Server, DB2, Microsoft Access, etc. all adhere to the relational model of database design. The rules are as follows:

1.) The Information Rule: For a database to be relational, all information must be represented as data values.


2.) The Rule of Guaranteed Access: Every data value must be logically accessible through a combination of table name, column name, and primary key value.

3.) The Systematic Treatment of Null Values: If a piece of information is not present, it is represented as a null value in the database. It is vital to note that primary key values cannot be null, and that null values are different from spaces or zeroes (see the small demo after this list).

4.) The Database Description Rule: The description of the database is also maintained in a place called the data dictionary, and users can access it if they have the proper authority or privilege to do so.

5.) The Comprehensive Data Sublanguage Rule: The database must support a language covering the following:
  • Data definition
  • View definition
  • Data manipulation
  • Integrity constraints
  • Authorization
6.) The View Updating Rule: All views that are theoretically updatable can also be updated by the system.

7.) The Insert and Update Rule: Data manipulation commands such as insert, update, and delete must be able to operate on multiple rows rather than only on a single row.

8.) The Physical Data Independence Rule: Database access by users must be independent of changes in the storage representation or access methods for the data.

9.) The Logical Data Independence Rule: End-user application programs and other activities must be unaffected when there is a change to the logical design of the database.

10.) The Integrity Independence Rule: Integrity constraints should also be stored in the database as data in tables.

11.) The Distribution Independence Rule: The distribution of portions of the database to various locations should be invisible to users of the database. Existing applications should continue to operate successfully:
  • when a distributed version of the DBMS is first introduced; and
  • when existing distributed data are redistributed around the system.

12.) The Nonsubversion Rule: If an RDBMS supports a lower-level language, then that language must not be able to bypass any integrity constraints defined at the higher level.
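To illustrate rule 3, here is a minimal SQL*Plus demo showing that a null compares as unknown to everything, even to zero, which is exactly why a null differs from a zero or a blank:

SQL> select count(*) from dual where 0 = null;

  COUNT(*)
----------
         0

SQL> select count(*) from dual where null is null;

  COUNT(*)
----------
         1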




Enjoy        :-)
 

Configuring Kernel Parameters For Oracle 10g Installation


This section documents the checks and modifications to the Linux kernel that should be made by the DBA to support Oracle Database 10g. Before detailing these individual kernel parameters, it is important to fully understand the key kernel components that are used to support the Oracle Database environment.

The kernel parameters and shell limits presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that we tune these values to optimize the performance of the system.

Verify that the kernel parameters shown in this section are set to values greater than or equal to the recommended values.

Shared Memory  :  Shared memory allows processes to access common structures and data by placing them in a shared memory segment. This is the fastest form of inter-process communication (IPC) available, mainly because no kernel involvement occurs when data is passed between the processes; data does not need to be copied between processes.

Oracle makes use of shared memory for its Shared Global Area (SGA), which is an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance, since it is responsible for holding the database buffer cache, shared SQL, access paths, and much more.

To determine all current shared memory limits, use the following :

# ipcs  -lm
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 1073741824
min seg size (bytes) = 1 

The following list describes the kernel parameters that can be used to change the shared memory configuration for the server:

1.) shmmax  -   Defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA is comprised of shared memory and it is possible that incorrectly setting shmmax  could limit the size of the SGA. When setting shmmax, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate shmmax setting could result in the following:
ORA-27123: unable to attach to shared memory segment

We can determine the value of shmmax by performing the following :

# cat /proc/sys/kernel/shmmax
4294967295

For most Linux systems, the default value for shmmax is 32MB. This size is often too small to configure the Oracle SGA. The default value for shmmax in CentOS 5 is 4GB, which is more than enough for the Oracle configuration. Note that this 4GB value is not the "normal" default for shmmax in a Linux environment; CentOS 5 inserts the following two entries in the file /etc/sysctl.conf:

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

2.) shmmni   :   This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096. This value is sufficient and typically does not need to be changed. We can determine the value of shmmni by performing the following:

# cat /proc/sys/kernel/shmmni 
4096

3.) shmall   :   This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. The value of this parameter should always be at least ceil(shmmax / page size). We can determine the value of shmall by performing the following:

# cat /proc/sys/kernel/shmall 
268435456

For most Linux systems, the default value for shmall is 2097152, which is adequate for most configurations. The default value for shmall in CentOS 5 is 268435456 (see above), which is more than enough for the Oracle configuration described in this article. Note that this value of 268435456 is not the "normal" default for shmall in a Linux environment; CentOS 5 inserts the following two entries in the file /etc/sysctl.conf:

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
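As a sanity check, the minimum shmall can be derived from shmmax and the page size with standard tools (a small sketch; note that $(( )) truncates, so round up when shmmax is not an exact multiple of the page size):

# getconf PAGE_SIZE
4096
# echo $(( $(cat /proc/sys/kernel/shmmax) / $(getconf PAGE_SIZE) ))
1048575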

4.) shmmin :  This parameter controls the minimum size (in bytes) for a shared memory segment. The default value for shmmin is 1 and is adequate for the Oracle configuration described in this article. We can determine the value of shmmin by performing the following:

# ipcs -lm | grep "min seg size"
min seg size (bytes) = 1

Semaphores 
After the DBA has configured the shared memory settings, it is time to take care of configuring the semaphores. The best way to describe a semaphore is as a counter that is used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in System V where each one is a counting semaphore. When an application requests semaphores, it does so using "sets". To determine all current semaphore limits, use the following:

#  ipcs -ls
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

We  can also use the following command:
# cat /proc/sys/kernel/sem
250     32000   32      128

The following list describes the kernel parameters that can be used to change the semaphore configuration for the server:

i.) semmsl - This kernel parameter is used to control the maximum number of semaphores per semaphore set. Oracle recommends setting semmsl to the largest PROCESSES initialization parameter setting in the init.ora file of all databases on the Linux system, plus 10. Also, Oracle recommends setting semmsl to a value of no less than 100.

ii.) semmni - This kernel parameter is used to control the maximum number of semaphore sets in the entire Linux system. Oracle recommends setting semmni to a value of no less than 100.

iii.) semmns - This kernel parameter is used to control the maximum number of semaphores (not semaphore sets) in the entire Linux system. Oracle recommends setting semmns to the sum of the PROCESSES instance parameter settings for each database on the system, plus the largest PROCESSES setting counted twice, plus 10 for each Oracle database on the system. Use the following calculation to determine the maximum number of semaphores that can be allocated on a Linux system. It will be the lesser of:
SEMMNS  -or-  (SEMMSL * SEMMNI)

iv.) semopm - This kernel parameter is used to control the number of semaphore operations that can be performed per semop system call. The semop system call (function) provides the ability to perform operations on multiple semaphores with one semop system call. Since a semaphore set can have at most semmsl semaphores, it is recommended to set semopm equal to semmsl in some situations. Oracle recommends setting semopm to a value of no less than 100.
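All four semaphore values are set through the single kernel.sem parameter, in the order semmsl, semmns, semopm, semmni (a quick sketch; the values mirror the recommendations above and can be made permanent in /etc/sysctl.conf):

# sysctl -w kernel.sem="250 32000 100 128"
kernel.sem = 250 32000 100 128
# cat /proc/sys/kernel/sem
250     32000   100     128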

File Handles :
When configuring the Linux server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles denotes the number of open files that you can have on the Linux system. Use the following command to determine the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max 
102312

Oracle recommends that the file handles for the entire system be set to at least 65536.  We can query the current usage of file handles by using the following :

# cat /proc/sys/fs/file-nr
3072    0       102312

The file-nr file displays three parameters:
•        Total allocated file handles
•        Currently used file handles
•        Maximum file handles that can be allocated

If we need to increase the value in /proc/sys/fs/file-max, make sure that the ulimit is set properly. Usually for Linux 2.4 and 2.6 kernels it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

# ulimit
unlimited

IP Local Port Range  :
Oracle strongly recommends setting the local port range ip_local_port_range for outgoing connections to "1024 65000", which is needed for systems with high usage. This kernel parameter defines the local port range that TCP and UDP traffic choose from.
The default value for ip_local_port_range is ports 32768 through 61000, which is inadequate for a successful Oracle configuration. Use the following command to determine the value of ip_local_port_range:

# cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
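The range can be widened at runtime with sysctl (a small sketch; this matches the value added to /etc/sysctl.conf at the end of this post):

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"
net.ipv4.ip_local_port_range = 1024 65000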

Networking Settings  : 
With Oracle 9.2.0.1 and later, Oracle makes use of UDP as the default protocol on Linux for inter-process communication (IPC), such as Cache Fusion and Cluster Manager buffer transfers between instances within the RAC cluster.

Oracle strongly suggests adjusting the default and maximum receive buffer sizes (the SO_RCVBUF socket option) to 1MB and the default and maximum send buffer sizes (the SO_SNDBUF socket option) to 256KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. For TCP, the receive buffer cannot overflow because the peer is not allowed to send data beyond the buffer size window. UDP has no such flow control, however: datagrams that do not fit in the socket receive buffer are discarded, so a fast sender can overwhelm a slow receiver. Use the following commands to determine the current buffer size (in bytes) of each of the IPC networking parameters:

# cat /proc/sys/net/core/rmem_default
109568

# cat /proc/sys/net/core/rmem_max
131071

# cat /proc/sys/net/core/wmem_default
109568

# cat /proc/sys/net/core/wmem_max
131071

Setting Kernel Parameters for Oracle
If the value of any kernel parameter is different from the recommended value, it will need to be modified. For this article, I identified the following values, which need to be added to the /etc/sysctl.conf file (read during the boot process).

kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144

After adding the above lines to the /etc/sysctl.conf file, they persist across reboots. To apply these kernel parameter changes to the running system without a reboot, enter the following command:

# /sbin/sysctl -p



Enjoy     :-) 

Thursday, April 28, 2011

What is Oracle Streams ?


Oracle Streams enables information sharing. Using Oracle Streams, each unit of shared information is called a message, and we can share these messages in a stream. The stream can propagate information within a database or from one database to another. The stream routes specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing messages, and sharing the messages with other databases and applications. Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions.

Using Oracle Streams, we control what information is put into a stream, how the stream flows or is routed from database to database, what happens to messages in the stream as they flow into each database, and how the stream terminates. Based on our specifications, Streams can capture, stage, and manage messages in the database automatically, including, but not limited to, data manipulation language (DML) changes and data definition language (DDL) changes. We can also put user-defined messages into a stream, and Streams can propagate the information to other databases or applications automatically. When messages reach a destination, Streams can consume them based on our specifications.

(Figure: Oracle Streams information flow)
What Can Streams Do?
The following sections provide an overview of what Streams can do.

1.) Capture Messages at a Database   :     A capture process can capture database events, such as changes made to tables, schemas, or an entire database. Such changes are recorded in the redo log for a database, and a capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The rules used by a capture process determine which changes it captures, and these captured changes are called captured messages.
The database where changes are generated in the redo log is called the source database. A capture process can capture changes locally at the source database, or it can capture changes remotely at a downstream database. A capture process enqueues logical change records (LCRs) into a queue that is associated with it.

2.) Stage Messages in a Queue    :    Messages are stored (or staged) in a queue. These messages can be captured messages or user-enqueued messages. A capture process enqueues messages into an ANYDATA queue, which can stage messages of different types. Users and applications can enqueue messages into an ANYDATA queue or into a typed queue.

3.) Propagate Messages from One Queue to Another   :    Streams propagations can propagate messages from one queue to another. These queues can be in the same database or in different databases. Rules determine which messages are propagated by a propagation.

4.) Consume Messages    :    A message is consumed when it is dequeued from a queue. An apply process can dequeue messages from a queue implicitly. A user, application, or messaging client can dequeue messages explicitly. The database where messages are consumed is called the destination database.

Rules determine which messages are dequeued and processed by an apply process. An apply process can apply messages directly to database objects or pass messages to custom PL/SQL subprograms for processing.
Rules determine which messages are dequeued by a messaging client. A messaging client dequeues messages when it is invoked by an application or a user.
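To make the capture and staging pieces concrete, below is a minimal PL/SQL sketch using the DBMS_STREAMS_ADM package. The administrator schema, queue, table, and capture names (strmadmin, streams_queue, scott.emp, capture_emp) are illustrative assumptions, not part of the original post:

begin
  -- create a queue table and an ANYDATA queue to stage messages
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
  -- create a capture process with rules to capture DML changes on scott.emp
  dbms_streams_adm.add_table_rules(
    table_name   => 'scott.emp',
    streams_type => 'capture',
    streams_name => 'capture_emp',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => false);
end;
/

A corresponding apply process at the destination database would be created in the same way with streams_type => 'apply'.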

What Are the Uses of Streams?
The following sections briefly describe some of the reasons for using Streams. In some cases, Streams components provide infrastructure for various features of Oracle.

1.) Data Replication  :    Streams can capture DML and DDL changes made to database objects and replicate those changes to one or more other databases . A Streams capture process captures changes made to source database objects and formats them into LCRs, which can be propagated to destination databases and then applied by Streams apply processes.

The destination databases can allow DML and DDL changes to the same database objects, and these changes might or might not be propagated to the other databases in the environment. In other words, we can configure a Streams environment with one database that propagates changes, or we can configure an environment where changes are propagated between databases bidirectionally. Also, the tables for which data is shared do not need to be identical copies at all databases; both the structure and the contents of these tables can differ at different databases.

2.) Event Management and Notification   :    Business events are valuable communications between applications or organizations. An application can enqueue messages that represent events into a queue explicitly, or a Streams capture process can capture database events and encapsulate them into messages called LCRs. These captured messages can be the results of DML or DDL changes. Propagations can propagate messages in a stream through multiple queues. Finally, a user application can dequeue messages explicitly, or a Streams apply process can dequeue messages implicitly. An apply process can reenqueue these messages explicitly into the same queue or a different queue if necessary. 

3.) Data Warehouse Loading  :  Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing existing data, and adding new data from the operational databases. Streams components can capture changes made to a production system and send those changes to a staging database or directly to a data warehouse or operational data store. Streams capture of redo data avoids unnecessary overhead on the production systems. Support for data transformations and user-defined apply procedures enables the necessary flexibility to reformat data or update warehouse-specific data fields as data is loaded. 

4.)  Data Protection    :     One solution for data protection is to create a local or remote copy of a production database. In the event of human error or a catastrophe, the copy can be used to resume processing. We can use Streams to configure flexible high availability environments.
In addition, we can use Oracle Data Guard, a data protection feature that uses some of the same infrastructure as Streams, to create and maintain a logical standby database, which is a logically equivalent standby copy of a production database.

5.)  Database Availability During Upgrade and Maintenance Operations  :  We can use the features of Oracle Streams to achieve little or no database downtime during database upgrade and maintenance operations. Maintenance operations include migrating a database to a different platform, migrating a database to a different character set, modifying database schema objects to support upgrades to user-created applications, and applying an Oracle software patch.

Below is an example of Oracle Streams sharing information between databases.

(Figure: Oracle Streams sharing information between databases)
Enjoy        :-)

Monday, April 25, 2011

ORACLE 10G INSTALLATION ON REDHAT 5.1 LINUX

This is a step-by-step guide which will help in installing Oracle 10g on RHEL 5.1 (Red Hat Enterprise Linux 5.1), 32-bit architecture.

Step 1 : The first thing we need to verify is the hardware for an Oracle 10g installation.

• Check physical RAM:
  # grep MemTotal /proc/meminfo
  MemTotal:      2075424 kB
We need at least 1024 MB of physical RAM. In my case I have 2048 MB.

• Check swap space:
  # grep SwapTotal /proc/meminfo
  SwapTotal:     3148732 kB
The following is Oracle's requirement for swap space:

Available RAM              Swap Space Required
Between 1 GB and 2 GB      1.5 to 2 times the size of RAM
Between 2 GB and 8 GB      Equal to the size of RAM
More than 8 GB             0.75 times the size of RAM
• Check space available in /tmp:
  # df -h /tmp
  Filesystem            Size     Used     Avail    Use%    Mounted on
  /dev/sda5             1.5G     35M      1.4G     3%      /tmp
We need at least 400 MB of space in the /tmp directory; make sure the "Avail" column in the above output shows at least 400M. In my case I have 1.4G of space available in /tmp.

• Check space for the Oracle software and the pre-configured database:
I have created a separate partition "/u01" for the Oracle software and database files.
  # df -h /u01
  Filesystem            Size     Used     Avail    Use%    Mounted on
  /dev/sda2             15G      166M     14G      2%      /u01

I have 14G of available space on the partition /u01. Space requirements for the Oracle software:
     Enterprise Edition         2.5G
     Standard Edition           2.3G
     Custom (maximum)           3G

Once all hardware requirements are verified, we will proceed with further configuration. Make sure that there is an entry in the /etc/hosts file for your machine like this:
[IP-address]   [fully-qualified-machine-name]   [machine-name]
Where "fully-qualified-machine-name" is your  "machine_name"."domain_name"

Step 2 : Before starting to install the Oracle 10g software, make sure that we have the packages below installed on our Linux box, else we will get error(s) during the installation process.

glibc-2.3.4-2.13
glibc-common-2.3.4-2.13
gcc-3.4.4-2
gcc-c++-3.4.4-2
libstdc++-3.4.4-2
libstdc++-devel-3.4.4-2
libaio-0.3.103-3
binutils-2.15.92.0.2-15
make-3.80-5
compat-db-4.1.25-9
gnome-libs-1.4.1.2.90-44.1
pdksh-5.2.14-30.3
sysstat-5.0.5-1
Execute the command below as root to make sure that these RPMs are installed. If any are not installed, download them from the appropriate Linux source. For example:
# rpm -qa | grep glib*
The above command displays all the installed packages whose names start with glib; similarly, we can check for all the other packages.

If any of the above packages are not installed, run the following command:
# rpm -ivh </path/to/><version>.i386.rpm
We will find the packages on the Red Hat Enterprise Linux 5 DVD.

Step 3 : Create a user for the installation
We need an OS user account, "oracle", which owns the Oracle software; the installation must be performed by, and the software managed with, this account. We do not need the oinstall group if we are not installing on a production server, or if we (or our team) are the only people responsible for future installations/upgrades of the Oracle software. Members of the dba group can administer Oracle databases, for example starting up and shutting down databases. The oinstall group is often needed by organizations that have separate groups to maintain the software and the database. In this scenario, the group membership prevents unauthorized access to the database by personnel who maintain the software, and it prevents the database administrators from making changes to the software and the installation's inventory directory.

# su -
# groupadd dba
# groupadd oinstall
# useradd -g oinstall -G dba oracle
# passwd oracle
<<Enter password>>
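We can quickly confirm the account and its group memberships with the id command (the numeric IDs below are examples and will differ per system):

# id oracle
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),501(dba)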

Step 4: Setting system parameters
Kernel parameters: Oracle recommends that we set the shared memory segment attributes as well as the semaphores to the following values; if they are not set, database instance creation will fail. Every OS process needs semaphores to wait on for resources. I added the following lines to the /etc/sysctl.conf file. For more about the kernel parameters, see the post "Configuring Kernel Parameters For Oracle 10g Installation" above.
Add/change the appropriate variable values in the /etc/sysctl.conf file as shown below.

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
After adding these lines to /etc/sysctl.conf, run the command below as root to bring them into effect.
# sysctl -p

Step 5: Edit the /etc/pam.d/login file and add the following line:
session    required     pam_limits.so

Step 6: Edit the /etc/security/limits.conf file
To improve the performance of the software on Linux systems, we must increase the following shell limits for the oracle user. Add the following lines to the /etc/security/limits.conf file:
oracle    soft  nproc  2047
oracle    hard  nproc  16384
oracle    soft  nofile  1024
oracle    hard  nofile  65536
Where "nproc" is the maximum number of processes available to the user and "nofiles" is the number of open file descriptors.

Step 7 : Creating Oracle directories
As per OFA, the Oracle base directory has the path /mount_point/app/oracle_sw_owner, where mount_point is the mount point directory for the file system that will contain the Oracle software. I have used /u01 for the mount point directory. However, we could choose another mount point directory, such as /oracle or /opt/soft.

# mkdir  -p /u01/app/oracle
# chown  -R  oracle:oinstall  /u01/app/oracle
# mkdir  -p  /u02/oradata/db102
# chown  -R  oracle:oinstall /u02/oradata/db102
# chmod  -R  775  /u01/app/oracle
# chmod  -R  775  /u02/oradata/db102

Step 8 : Setting the Oracle environment
Edit the /home/oracle/.bash_profile file and add the following lines:
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
# If /tmp doesn't have 400mb space free then you can workaround it
# by pointing the variables TMP AND TMPDIR to a location where you
# have sufficient space. Oracle will then use this directory for
# temporary files.
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=orcl; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export CLASSPATH
if [ $USER = "oracle" ]; then
      if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p 16384
                ulimit -n 65536
      else
                ulimit -u 16384 -n 65536
      fi
fi
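After saving the file, log in again as oracle (or source the profile) and spot-check that the environment took effect; the values shown match the profile above:

$ su - oracle
$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/db_1
$ echo $ORACLE_SID
orcl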

Step 9: Release specification
Oracle 10g is not certified for installation on RHEL 5; therefore the installer runs a check on the operating system when it starts and bounces back with an error if the Red Hat release is not redhat-4, redhat-3, etc. Open /etc/redhat-release, remove whatever release is specified there, and put this: redhat-4.

Step 10: Installation process of the Oracle 10g software
#  xhost +     (access control disabled, clients can connect from any host)
# mount  /media/cdrom    (if your software is on a CD; otherwise this step is not needed)
# su - oracle
$ echo $ORACLE_BASE
/u01/app/oracle
$ echo $ORACLE_SID
orcl
$ /media/cdrom/runInstaller
The OUI (Oracle Universal Installer) should start, and we can follow the graphical screens to install as per our requirements.

Note: Disable Secure Linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows: SELINUX=disabled. If we leave SELINUX=enforcing, then we may get an error later while starting sqlplus.



Enjoy         :-)