Saturday, April 30, 2011

Control File Contents

A control file contains information about the associated database that is required for access by an instance, both at startup and during normal operation. Control file information can be modified only by Oracle Database; no database administrator or user can edit a control file. It contains (but is not limited to) the following types of information:
  • Database name and creation date
  • Database ID (DBID), which is unique to each database
  • RESETLOGS SCN and its timestamp
  • Current archive log mode
  • Archived log history
  • Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline status)
  • Redo threads (current online redo log)
  • Log records (sequence numbers, SCN range in each log)
  • RMAN catalog records (backups, copies, configuration)
  • Database block corruption information
The location of the control files is specified through the CONTROL_FILES initialization parameter.
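
To confirm where the control files of a running instance actually live, we can query the standard V$CONTROLFILE view, or simply use SHOW PARAMETER:

SQL> select name from v$controlfile;
SQL> show parameter control_files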


Size of control files :
The size of the control files is governed by the following settings (the MAX* values are fixed when the control file is created via CREATE DATABASE or CREATE CONTROLFILE; see the example below) :
  • MAXLOGFILES
  • MAXLOGMEMBERS
  • MAXLOGHISTORY
  • MAXINSTANCES
  • CONTROL_FILE_RECORD_KEEP_TIME (an initialization parameter)
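For illustration, these limits appear in the header of the CREATE CONTROLFILE statement that the trace backup (see the command at the end of this post) generates. The values below are only an example of such a header, not recommendations:

CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
...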
Sections:
The control file contains the following sections:
Archived Log (reusable)
Backup Corruption (reusable)
Backup Datafile (reusable)
Backup Piece (reusable)
Backup Redolog (reusable)
Backup Set (reusable)
Backup spfile
CKPT Process
Copy Corruption (reusable)
Datafile
Datafile Copy (reusable)
Datafile History
Database Incarnation
Deleted Objects (reusable)
Filename
Flashback Log
Instance Space Reservation
Log History (reusable)
MTTR
Offline Range (reusable)
Recovery Destination
Removable Recovery Files
Rman Status
Rman Configuration
Redo Threads
Redo Logs
Tablespace
Temporary Filename
Thread Instance Name Mapping
Proxy Copy

The minimum number of days that a reusable record is kept in the control file is controlled by the CONTROL_FILE_RECORD_KEEP_TIME parameter. These sections consist of records; the record size, the total number of records, and the number of used records in each section are exposed through V$CONTROLFILE_RECORD_SECTION.
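
For example, to see how large each section's records are and how many record slots are in use (column names as documented for V$CONTROLFILE_RECORD_SECTION):

SQL> select type, record_size, records_total, records_used
  2    from v$controlfile_record_section
  3   order by type;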

To check the information inside the control file, dump it to a readable SQL script with the command below:
SQL> alter database backup controlfile to trace as 'C:\CREATE_CONTROL.sql' ;




Enjoy      :-)

Friday, April 29, 2011

What is RBA ?

An RBA (Redo Block Address) points to a specific physical location within a redo log file. The "tail of the log" is the RBA of the most recent redo entry written to the redo log file. An RBA is ten bytes long and has three components:

  • the log file sequence number (4 bytes)
  • the log file block number (4 bytes)
  • the byte offset into the block at which the redo record starts (2 bytes)

For example, RBA [0x775.2.10] maps to log file sequence 0x775, block number 0x2, byte offset 0x10.

Different types of RBA are tracked in the SGA:

Low RBA : The address of the first redo change applied to a dirty buffer is called the low RBA. We can check the low RBA in x$bh.

High RBA : The address of the last (most recent) redo change applied to a dirty buffer is called the high RBA. We can also check the high RBA in x$bh.

Checkpoint RBA : When incremental checkpointing is enabled, the position up to which DBWR has written buffers from the checkpoint queue is the checkpoint RBA. This RBA is copied into the control file's checkpoint progress record. When instance recovery occurs, it starts from the checkpoint RBA recorded in the control file. We can check this RBA in x$targetrba (sometimes in x$kccrt).

On-disk RBA : The RBA up to which redo has been flushed to the online redo log file on disk. This RBA is recorded in the control file record section. We can check the on-disk RBA in x$kcccp (sometimes in x$targetrba).
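
As a quick sketch, the low and high RBAs of dirty buffers can be inspected in x$bh when connected as SYS. The column names below (lrba_seq, lrba_bno, hrba_seq, hrba_bno) are as commonly seen on 10g/11g and may vary between versions:

SQL> select lrba_seq, lrba_bno, hrba_seq, hrba_bno
  2    from x$bh
  3   where lrba_seq > 0 ;   -- only buffers with pending redo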

How the RBA comes into the picture :
CKPT records checkpoint information in the control file to maintain bookkeeping information such as checkpoint progress. Each instance checkpoint refers to some RBA (the checkpoint RBA); all redo prior to this RBA has been written to disk. Hence recovery time is proportional to the difference between the checkpoint RBA and the end of the redo log.

Given a checkpoint RBA, DBWR writes buffers from the head of the checkpoint queue until the low RBA of the buffer at the head of the queue is greater than the checkpoint RBA. At this point, CKPT can record the checkpoint progress record in the control file (phase 3). The three phases are:
PHASE (1) : A process initiates the checkpoint; the checkpoint RBA (the current RBA, i.e. the RBA of the last change made to any buffer) is marked at the time the request is initiated.
PHASE (2) : DBWR writes all required buffers, i.e. all buffers that have been modified at RBAs less than or equal to the checkpoint RBA.
PHASE (3) : After all required buffers have been written, the CKPT process records the completion of the checkpoint in the control file.

The checkpoint RBA is copied into the checkpoint progress record of the control file by the checkpoint heartbeat once every 3 seconds. Instance recovery, when needed, begins from the checkpoint RBA recorded in the control file. The target RBA is the point up to which DBWn should seek to advance the checkpoint RBA in order to satisfy instance recovery objectives.

The term sync RBA is sometimes used to refer to the point up to which LGWR is required to sync the thread. However, this is not a full RBA -- only a redo block number is used at this point.




What are Codd's Rules ?

Dr. E.F. Codd, an IBM researcher, first developed the relational data model in 1970. Over time it has proved to be flexible, extensible, and robust. In 1985 Codd published a list of 12 rules, known as "Codd's 12 Rules", that define how a true RDBMS should be evaluated. Understanding these rules greatly improves the ability to understand RDBMSs in general, including Oracle. Also note that these rules are guidelines because, to date, no commercial relational database system fully conforms to all 12 rules.


It is important to remember that RDBMS is not a product; it is a method of design. Database products like Oracle Database, SQL Server, DB2, and Microsoft Access all adhere to the relational model of database design. The rules are as follows:

1.) The Information Rule : For a database to be relational, all information must be represented explicitly as data values.


2.) The Rule of Guaranteed Access : Every datum must be logically accessible by a combination of table name, column name, and the primary key value defined for that table.
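
As an illustration (the table, column, and key values here are hypothetical, in the style of Oracle's sample HR schema), any single value is addressed by naming the table, the column, and the primary key value:

SQL> select salary from employees where employee_id = 100 ;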

3.) The Systematic Treatment of Null Values : If a piece of information is not present, it is represented as a null value in the database. It is vital to note that primary key columns cannot be null, and that null values are distinct from spaces or zeroes.

4.) The Database Description Rule : The description of the database is itself maintained in the database, in a place called the data dictionary, and users can access it if they have the proper authority or privilege.

5.) The Comprehensive Data Sublanguage Rule : The database must support a comprehensive sublanguage that covers all of the following:
  • Data definition
  • View definition
  • Data manipulation
  • Integrity constraints
  • Authorization
6.) The View Updating Rule : All views that are theoretically updatable can also be updated by the system.

7.) The Insert and Update Rule : Data manipulation commands like insert, update, and delete must operate on sets of rows, not just on single rows.

8.) The Physical Data Independence Rule : User access to the database must be independent of changes in the storage representation or access methods.

9.) The Logical Data Independence Rule : End-user application programs and other activities must remain unaffected when there is a change to the logical design of the database.

10.) The Integrity Independence Rule : Integrity constraints must be stored in the database as data in tables, not in application programs.

11.) The Distribution Independence Rule : The distribution of portions of the database to various locations should be invisible to users of the database. Existing applications should continue to operate successfully:
  • when a distributed version of the DBMS is first introduced; and
  • when existing distributed data are redistributed around the system.

12.) The No Subversion Rule : If an RDBMS supports a lower-level language, that language must not be able to bypass the integrity constraints defined in the higher-level relational language.




Enjoy        :-)
 

Configuring Kernel Parameters For Oracle 10g Installation


This section documents the checks and modifications to the Linux kernel that should be made by the DBA to support Oracle Database 10g. Before detailing these individual kernel parameters, it is important to fully understand the key kernel components that are used to support the Oracle Database environment.

The kernel parameters and shell limits presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that we tune these values to optimize the performance of the system.

Verify that the kernel parameters shown in this section are set to values greater than or equal to the recommended values.

Shared Memory : Shared memory allows processes to access common structures and data by placing them in a shared memory segment. This is the fastest form of Inter-Process Communication (IPC) available, mainly because no kernel involvement occurs when data is passed between the processes; data does not need to be copied between processes.

Oracle makes use of shared memory for its Shared Global Area (SGA), an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance since it holds the database buffer cache, shared SQL, access paths, and much more.

To determine all current shared memory limits, use the following :

# ipcs  -lm
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 1073741824
min seg size (bytes) = 1 

The following list describes the kernel parameters that can be used to change the shared memory configuration for the server:

1.) shmmax  -   Defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA is comprised of shared memory and it is possible that incorrectly setting shmmax  could limit the size of the SGA. When setting shmmax, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate shmmax setting could result in the following:
ORA-27123: unable to attach to shared memory segment

We can determine the value of shmmax by performing the following :

# cat /proc/sys/kernel/shmmax
4294967295

For most Linux systems, the default value for shmmax is 32MB. This size is often too small to configure the Oracle SGA. The default value for shmmax in CentOS 5, however, is 4GB, which is more than enough for the Oracle configuration. Note that this 4GB value is not the "normal" default for shmmax in a Linux environment; CentOS 5 ships with the following two lines in the file /etc/sysctl.conf:

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
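
If shmmax does need to be raised (for example to the 2GB value used at the end of this article), it can be changed on the running system; this is a sketch using standard sysctl syntax, and the same value must also go into /etc/sysctl.conf to survive a reboot:

# /sbin/sysctl -w kernel.shmmax=2147483648
kernel.shmmax = 2147483648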

2.) shmmni   :   This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096. This value is sufficient and typically does not need to be changed. We can determine the value of shmmni by performing the following:

# cat /proc/sys/kernel/shmmni 
4096

3.) shmall   :   This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. The value of this parameter should always be at least ceil(shmmax / PAGE_SIZE). We can determine the value of shmall by performing the following :

# cat /proc/sys/kernel/shmall 
268435456

For most Linux systems, the default value for shmall is 2097152, which is adequate for most configurations. The default value for shmall in CentOS 5 is 268435456 (see above), which is more than enough for the Oracle configuration described in this article. Note that this value is not the "normal" default for shmall in a Linux environment; CentOS 5 ships with the following two lines in the file /etc/sysctl.conf:

# Controls the total amount of shared memory that can be allocated, in pages
kernel.shmall = 268435456
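
To sanity-check that shmall is at least shmmax / PAGE_SIZE, a quick shell one-liner (standard procfs path and getconf; compare the result with the current value in /proc/sys/kernel/shmall):

# echo $(( $(cat /proc/sys/kernel/shmmax) / $(getconf PAGE_SIZE) ))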

4.) shmmin :  This parameter controls the minimum size (in bytes) for a shared memory segment. The default value for shmmin is 1 and is adequate for the Oracle configuration described in this article. We can determine the value of shmmin by performing the following:

# ipcs -lm | grep "min seg size"
min seg size (bytes) = 1

Semaphores 
After the DBA has configured the shared memory settings, it is time to take care of configuring the semaphores. The best way to describe a semaphore is as a counter that is used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in System V where each one is a counting semaphore. When an application requests semaphores, it does so using "sets". To determine all current semaphore limits, use the following:

#  ipcs -ls
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

We can also use the following command; the four values reported are semmsl, semmns, semopm, and semmni, in that order:
# cat /proc/sys/kernel/sem
250     32000   32      128

The following list describes the kernel parameters that can be used to change the semaphore configuration for the server:

i.) semmsl - This kernel parameter controls the maximum number of semaphores per semaphore set. Oracle recommends setting semmsl to the largest PROCESSES instance parameter setting in the init.ora file for all databases on the Linux system, plus 10. Also, Oracle recommends setting semmsl to a value of no less than 100.

ii.) semmni - This kernel parameter is used to control the maximum number of semaphore sets in the entire Linux system. Oracle recommends setting semmni to a value of no less than 100.

iii.) semmns - This kernel parameter controls the maximum number of semaphores (not semaphore sets) in the entire Linux system. Oracle recommends setting semmns to the sum of the PROCESSES instance parameter settings for each database on the system, plus the largest PROCESSES setting counted twice, plus 10 for each Oracle database on the system. The maximum number of semaphores that can actually be allocated on a Linux system will be the lesser of:
SEMMNS  -or-  (SEMMSL * SEMMNI)

iv.) semopm - This kernel parameter controls the number of semaphore operations that can be performed per semop system call. The semop system call (function) provides the ability to perform operations on multiple semaphores with one call. Because a semaphore set can have up to semmsl semaphores per set, it is sometimes recommended to set semopm equal to semmsl. Oracle recommends setting semopm to a value of no less than 100.
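
All four semaphore values are set together through the single kernel.sem parameter; as a sketch, using the values recommended at the end of this article:

# /sbin/sysctl -w kernel.sem="250 32000 100 128"
kernel.sem = 250 32000 100 128

The same line (without the sysctl -w prefix) goes into /etc/sysctl.conf to persist across reboots.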

File Handles :
When configuring the Linux server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles denotes the number of open files that you can have on the Linux system. Use the following command to determine the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max 
102312

Oracle recommends that the file handles for the entire system be set to at least 65536.  We can query the current usage of file handles by using the following :

# cat /proc/sys/fs/file-nr
3072    0       102312

The file-nr file displays three parameters:
•        Total allocated file handles
•        Currently used file handles
•        Maximum file handles that can be allocated

If we need to increase the value in /proc/sys/fs/file-max, make sure that the ulimit is set properly. Usually for Linux 2.4 and 2.6 kernels it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

# ulimit
unlimited
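
Note that file-max is a system-wide limit. The per-process limit for the oracle software owner is usually raised separately in /etc/security/limits.conf; a typical (illustrative) pair of entries looks like this:

oracle  soft  nofile  65536
oracle  hard  nofile  65536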

IP Local Port Range  :
Oracle strongly recommends setting the local port range ip_local_port_range for outgoing messages to "1024 65000", which is needed for high-usage systems. This kernel parameter defines the local port range for TCP and UDP traffic to choose from.
The default value for ip_local_port_range is ports 32768 through 61000, which is inadequate for a successful Oracle configuration. Use the following command to determine the value of ip_local_port_range:

# cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
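
To apply the recommended range on the running system (the persistent /etc/sysctl.conf entry appears at the end of this article):

# /sbin/sysctl -w net.ipv4.ip_local_port_range="1024 65000"
net.ipv4.ip_local_port_range = 1024 65000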

Networking Settings  : 
With Oracle 9.2.0.1 and later, Oracle makes use of UDP as the default protocol on Linux for inter-process communication (IPC), such as Cache Fusion and Cluster Manager buffer transfers between instances within the RAC cluster.

Oracle strongly suggests adjusting the default and maximum receive buffer size (SO_RCVBUF socket option) to 1MB and the default and maximum send buffer size (SO_SNDBUF socket option) to 256KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. With TCP, the receive buffer cannot overflow because the peer is not allowed to send data beyond the advertised window.

UDP offers no such flow control: datagrams are simply discarded if they do not fit in the socket receive buffer, which allows a fast sender to overwhelm a slow receiver. Use the following commands to determine the current buffer size (in bytes) of each of the IPC networking parameters:

# cat /proc/sys/net/core/rmem_default
109568

# cat /proc/sys/net/core/rmem_max
131071

# cat /proc/sys/net/core/wmem_default
109568

# cat /proc/sys/net/core/wmem_max
131071

Setting Kernel Parameters for Oracle
If the value of any kernel parameter differs from the recommended value, it will need to be modified. For this article, I identified and provide the following values that need to be added to the /etc/sysctl.conf file, which is read during the boot process.

kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144

After adding the above lines to the /etc/sysctl.conf file, they persist each time the system reboots. If we would like to make these kernel parameter value changes to the current system without having to first reboot, enter the following command:

# /sbin/sysctl -p
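
To confirm the running values afterwards, sysctl can also print individual parameters by name, for example:

# /sbin/sysctl kernel.sem fs.file-max
kernel.sem = 250     32000   100     128
fs.file-max = 65536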



Enjoy     :-) 

Thursday, April 28, 2011

What is Oracle Streams ?


Oracle Streams enables information sharing. Using Oracle Streams, each unit of shared information is called a message, and we can share these messages in a stream. The stream can propagate information within a database or from one database to another. The stream routes specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing messages and sharing them with other databases and applications. Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions.

Using Oracle Streams, we control what information is put into a stream, how the stream flows or is routed from database to database, what happens to messages in the stream as they flow into each database, and how the stream terminates. Based on our specifications, Streams can capture, stage, and manage messages in the database automatically, including, but not limited to, data manipulation language (DML) changes and data definition language (DDL) changes. We can also put user-defined messages into a stream, and Streams can propagate the information to other databases or applications automatically. When messages reach a destination, Streams can consume them based on our specifications.

[Figure: Oracle Streams information flow]
What Can Streams Do ?
The following sections provide an overview of what Streams can do.

1.) Capture Messages at a Database   :     A capture process can capture database events, such as changes made to tables, schemas, or an entire database. Such changes are recorded in the redo log for a database, and a capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The rules used by a capture process determine which changes it captures, and these captured changes are called captured messages.
The database where changes are generated in the redo log is called the source database. A capture process can capture changes locally at the source database, or it can capture changes remotely at a downstream database. A capture process enqueues logical change records (LCRs) into a queue that is associated with it.

2.) Stage Messages in a Queue    :    Messages are stored (or staged) in a queue. These messages can be captured messages or user-enqueued messages. A capture process enqueues messages into an ANYDATA queue, which can stage messages of different types. Users and applications can enqueue messages into an ANYDATA queue or into a typed queue.

3.) Propagate Messages from One Queue to Another   :    Streams propagations can propagate messages from one queue to another. These queues can be in the same database or in different databases. Rules determine which messages are propagated by a propagation.

4.) Consume Messages    :    A message is consumed when it is dequeued from a queue. An apply process can dequeue messages from a queue implicitly. A user, application, or messaging client can dequeue messages explicitly. The database where messages are consumed is called the destination database.

Rules determine which messages are dequeued and processed by an apply process. An apply process can apply messages directly to database objects or pass messages to custom PL/SQL subprograms for processing.
Rules determine which messages are dequeued by a messaging client. A messaging client dequeues messages when it is invoked by an application or a user.
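
As a minimal PL/SQL sketch of these pieces on the source database (the names strmadmin, streams_queue, and hr.employees are illustrative; DBMS_STREAMS_ADM is the documented Streams administration package): first stage a queue, then add capture rules for one table:

SQL> BEGIN
  2    DBMS_STREAMS_ADM.SET_UP_QUEUE(
  3      queue_table => 'strmadmin.streams_queue_table',
  4      queue_name  => 'strmadmin.streams_queue');
  5  END;
  6  /

SQL> BEGIN
  2    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
  3      table_name   => 'hr.employees',
  4      streams_type => 'capture',
  5      streams_name => 'capture_emp',
  6      queue_name   => 'strmadmin.streams_queue',
  7      include_dml  => TRUE,
  8      include_ddl  => FALSE);
  9  END;
 10  /

A matching apply process on the destination database would be created with ADD_TABLE_RULES using streams_type => 'apply', and a propagation between the two queues with ADD_TABLE_PROPAGATION_RULES.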

What Are the Uses of Streams?
The following sections briefly describe some of the reasons for using Streams. In some cases, Streams components provide infrastructure for various features of Oracle.

1.) Data Replication  :    Streams can capture DML and DDL changes made to database objects and replicate those changes to one or more other databases . A Streams capture process captures changes made to source database objects and formats them into LCRs, which can be propagated to destination databases and then applied by Streams apply processes.

The destination databases can allow DML and DDL changes to the same database objects, and these changes might or might not be propagated to the other databases in the environment. In other words, we can configure a Streams environment with one database that propagates changes, or we can configure an environment where changes are propagated between databases bidirectionally. Also, the tables for which data is shared do not need to be identical copies at all databases: both the structure and the contents of these tables can differ at different databases, and the information in them can still be shared.

2.) Event Management and Notification   :    Business events are valuable communications between applications or organizations. An application can enqueue messages that represent events into a queue explicitly, or a Streams capture process can capture database events and encapsulate them into messages called LCRs. These captured messages can be the results of DML or DDL changes. Propagations can propagate messages in a stream through multiple queues. Finally, a user application can dequeue messages explicitly, or a Streams apply process can dequeue messages implicitly. An apply process can reenqueue these messages explicitly into the same queue or a different queue if necessary. 

3.) Data Warehouse Loading  :  Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing existing data, and adding new data from the operational databases. Streams components can capture changes made to a production system and send those changes to a staging database or directly to a data warehouse or operational data store. Streams capture of redo data avoids unnecessary overhead on the production systems. Support for data transformations and user-defined apply procedures enables the necessary flexibility to reformat data or update warehouse-specific data fields as data is loaded. 

4.)  Data Protection    :     One solution for data protection is to create a local or remote copy of a production database. In the event of human error or a catastrophe, the copy can be used to resume processing. We can use Streams to configure flexible high availability environments.
In addition, we can use Oracle Data Guard, a data protection feature that uses some of the same infrastructure as Streams, to create and maintain a logical standby database, which is a logically equivalent standby copy of a production database.

5.)  Database Availability During Upgrade and Maintenance Operations  :  We can use the features of Oracle Streams to achieve little or no database downtime during database upgrade and maintenance operations. Maintenance operations include migrating a database to a different platform, migrating a database to a different character set, modifying database schema objects to support upgrades to user-created applications, and applying an Oracle software patch.

[Figure: Oracle Streams sharing information between databases]

Enjoy        :-)