Saturday, March 26, 2011

Hot Backups, Extra Redo Generated and Fractured Blocks

Today I came across a good article. I have worked through it and expanded it to explain the topic in more detail; I hope you all find it useful. Before discussing this topic, let's have an overview of user-managed hot backup, i.e., what happens during begin backup mode and end backup mode.

When we begin backup, Oracle freezes the header of the datafiles (meaning the checkpoint SCN in the header will not advance any more until the backup is ended). It also instructs LGWR to write whole blocks to redo the first time a block is touched. Writes still occur to the datafile; only the header SCN is frozen. This is done so that at recovery time Oracle knows it must overwrite all blocks in the backup copy with redo entries, because some of them may be fractured. The original datafiles remain up to date, but the backup copies may not be, because the files keep changing while they are being backed up. When we end backup, Oracle unfreezes the header of the datafiles and allows SCNs to be recorded properly at checkpoint.
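We can actually watch the frozen header for ourselves. Here is a minimal sketch (assuming the USERS tablespace; the V$BACKUP and V$DATAFILE_HEADER views and their columns are standard, but the flow below is only illustrative):

SQL> alter tablespace users begin backup;
Tablespace altered.

SQL> select file#, status from v$backup where status = 'ACTIVE';
SQL> select file#, checkpoint_change# from v$datafile_header;

SQL> alter system checkpoint;
System altered.

SQL> select file#, checkpoint_change# from v$datafile_header;

SQL> alter tablespace users end backup;
Tablespace altered.

While the tablespace is in backup mode, V$BACKUP shows its files as ACTIVE, and their CHECKPOINT_CHANGE# in V$DATAFILE_HEADER stays at the begin-backup value even after the checkpoint; after end backup it starts moving forward again.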

Question:  The Oracle documentation tells us that when we put a tablespace in backup mode, the first DML in the session logs the entire block in the redo log buffer and not just the change vectors.

1.) Can we simulate an example to see this happening?
2.) What can be the purpose of logging the entire block the first time and not doing the same subsequently?


Answer:  Below, I’ve created a simulation. Pay attention to the “redo size” statistic in each run. First, I have updated a single row of the employees table.

SQL> set  autotrace  trace  stat
SQL> update  employees  set  first_name='Stephen'   where   employee_id = 100; 
1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
          1  consistent gets
          0  physical reads
        292  redo size
        669  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>  rollback;
Rollback  complete.

Notice the redo size was only 292 bytes, not a very large amount. Now, let’s put the USERS tablespace into hot backup mode.

SQL> alter  tablespace  users  begin  backup;
Tablespace altered.

SQL> update  employees  set  first_name = 'Stephen'  where  employee_id = 100;
1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          2  db block gets
          1  consistent gets
          0  physical reads
       8652  redo size
        670  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

Wow! Quite a difference. This time, we can see that at least an entire block was written to redo: 8,652 bytes total. Let’s run it one more time, with the tablespace still in hot backup mode.

SQL> /  
1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
          1  consistent gets
          0  physical reads
        292  redo size
        671  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

This time, it only generated 292 bytes of redo, the same as the original amount. However, to address the second question, we’re going to change a different block, by updating a record in the departments table instead of employees.

SQL> update  departments  set  department_name = 'Test Dept'  where  department_id = 270;
1 row updated.

Statistics
----------------------------------------------------------
         17  recursive calls
          1  db block gets
          5  consistent gets
          1  physical reads
       8572  redo size
        673  bytes sent via SQL*Net to client
        610  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

The result is that another entire block was written to redo. In the question, we stated: “The Oracle documentation tells us that when we put a tablespace in backup mode, the first DML in the session logs the entire block in the redo log buffer and not just the change vectors.” This is close, but not right on the mark.


It is not the first DML of the session, but the first DML against a block, that causes the whole block to be written to redo. By writing the full block image on that first change, Oracle ensures that the redo logs/archive trail contain at least one complete representation of each block that is changed while the tablespace is in backup mode. Subsequent changes to that block can therefore safely be recorded as change vectors only.

This process exists to resolve block fractures. A block fracture occurs when a block is being read by the backup and written to at the same time by DBWR. Because the OS (usually) reads in units of a different size than an Oracle block, the OS copy pulls pieces of an Oracle block at a time. What if the OS copy reads half a block, and while that is happening the block is changed by DBWR? When the OS copy reads the second half of the block, the result is two mismatched halves, which Oracle would not know how to reconcile.

This is also why the SCN in the datafile header does not change when a tablespace enters hot backup mode. The current SCNs are recorded in redo, but not in the datafile header. This ensures that Oracle will always recover over the backed-up datafile contents with redo entries. When recovery occurs, a fractured datafile block is replaced with the complete block image from redo, making it whole again. Once Oracle is certain it has a complete block, all it needs after that are the change vectors.



Enjoy    :-)

Friday, March 25, 2011

What is a Port ?

A port is just an interface between two computers or two software programs (think, for example, of the telephone socket your phone plugs into). So if two computers want to talk to each other, each of them communicates through a port, and if two software programs want to talk with each other, each of them talks through a port. To know more about hardware ports, please see Hardware port.

A DBA should mostly be concerned with software ports. A software port is a virtual connection (it has no physical shape) through which software programs talk with each other. The two well-known kinds of ports used by the majority of software programs are TCP and UDP ports.

During installation, Oracle Universal Installer assigns port numbers to components from a set of default port numbers. Many Oracle Database components and services use ports. As an administrator, it is important to know the port numbers used by these services, and to make sure that the same port number is not used by two services on our host.

Most port numbers are assigned during installation. Every component and service has an allotted port range, which is the set of port numbers Oracle Database attempts to use when assigning a port. Oracle Database starts with the lowest number in the range and performs the following checks:

I.) Is the port used by another Oracle Database installation on the host?
The installation may be up or down at the time; Oracle Database can still detect whether the port is used.

II.) Is the port used by a process that is currently running?
This could be any process on the host, even a non-Oracle Database process.

If the answer to either of the preceding questions is yes, Oracle Database moves to the next highest port in the allotted port range and continues checking until it finds a free port.

If we change a port number later, it is not updated in the portlist.ini file, so we can rely on this file only immediately after installation. If we want to see all the different port numbers, click on Different Port Numbers.

When we install Oracle Enterprise Manager on a server, Oracle automatically assigns some port numbers to it. So we can open Database Control in 10g by using a link like http://servername:1158/em, where 1158 is the port number.
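From inside the database we can also ask for a couple of ports directly. A small sketch, assuming Oracle XML DB is installed (it is by default in 10g/11g), uses the DBMS_XDB package:

SQL> select dbms_xdb.gethttpport as http_port,
  2         dbms_xdb.getftpport  as ftp_port
  3  from   dual;

A value of 0 normally means that the corresponding protocol is disabled.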

So, in this way, different Oracle components use different port numbers, and hence we should know what a port is. Sometimes problems arise because a port is blocked by a firewall or the Unix administrator has not configured the port correctly. In those cases we have to speak to the administrator and make sure that the port is configured correctly.

Enjoy    :-) 


Wednesday, March 23, 2011

The best PTC site to earn money

PTC sites are also known as Paid To Click sites. PTC sites are the easiest way to earn money online: you earn money simply by clicking your mouse and viewing ads, so they let you make money with almost no extra effort and require almost no skills. Besides ad viewing, there are referral and renting mechanisms through which you can earn some additional money from them. All you need is an Internet connection; open a PTC site and view the advertisements. You will normally receive $0.02 to $0.05 per ad view, and you can withdraw the money after a certain amount has accumulated in your account.

There are some rules for each PTC advertisement. For example, you have to view an advertisement for 30 seconds to get paid. Each PTC site also has its own Terms of Service, so before you register on any PTC site, first read through its Terms of Service. After reaching a certain amount of money you can withdraw it from the site to a PayPal or AlertPay account. For example, on the Neobux PTC site the minimum payout amount is set at $2.00 (two US dollars) for your first payout request, $3.00 (three US dollars) for the second, $4.00 (four US dollars) for the third, and so on up to $10.00 (ten US dollars).

Note that in the PTC industry there are far too many PTC sites, and most of them are scams. You register, earn some money in your site account (unpaid as yet), and whenever it is time to withdraw money from the site, your account gets disabled. So before working with PTC sites it is important to identify which ones are scam sites and which ones are good, because it is pathetic to spend time with them viewing advertisements only to be paid nothing.


Following is a list of some good PTC sites that I have heard of.

1.) Neobux:  As far as I know, Neobux is a genuine site in PTC site history. It is a free worldwide service available in both English and Portuguese. Its service consists of allowing advertisers to reach thousands of potential customers by displaying their advertisements.

Users click on an advertiser's advertisement and view it for the amount of time specified by the advertiser. After viewing the ad, the user is credited with a pre-determined amount of cash in their NeoBux account. I have spent well over $700 through Neobux advertisements.

2.) Enbux :  Enbux is still in beta, and I have heard good things about it. When you become a member you get the following benefits:

  • You'll be able to earn up to $0.02 per click.
  • You'll be able to earn up to $0.02 per click that your referrals make.
  • Instant payment/receiving system.
  • Access to a growing community.
  • Detailed statistics graphs of your account and referral performance.

3.) Bux.ee :  As a member at Bux.ee you can earn up to $0.02 (2 cents) per click and up to $0.01 (1 cent) for every ad your referrals click.

Besides these good PTC sites there are a lot of scam PTC sites, so beware of them. Here is the list of scam PTC sites that I can remember.


List of Scam PTC Sites
1.) Hybux
2.) Bux.to
3.) Tuibux
4.) Richptc
5.) BuxWiz
6.) BriteBux
7.) Valuebux
8.) Stockbux
9.) BuxForYou
10.) Qualitybux
11.) Neobux-India
12.) ProjectBux

So beware of these scam sites.


Enjoy            :-)


Friday, March 11, 2011

Data Pump Architecture in Oracle 11g

Oracle Data Pump was written from the ground up with an architecture designed to produce high performance with maximum flexibility. Understanding the architecture of Data Pump will help us to take advantage of its speed and features.

Data Pump Architecture :



Master Table : 
At the heart of every Data Pump operation is the master table. This is a table created in the schema of the user running the Data Pump job. It is a directory that maintains all details about the job: the current state of every object being exported or imported, the locations of those objects in the dumpfile set, the user-supplied parameters for the job, the status of every worker process, the current set of dump files, restart information, and so on. During a file-based export job, the master table is built during execution and written to the dumpfile set as the last step. Conversely, loading the master table into the current user’s schema is the first step of a file-based import operation, so that the master table can be used to sequence the creation of all objects imported. The use of the master table is the key to Data Pump's ability to restart a job in the event of a planned or unplanned job stoppage. Because it maintains the status of every object to be processed by the job, Data Pump knows which objects were being worked on, and whether or not those objects were successfully completed.
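We can peek at all of this while a job is running. A quick sketch (DBA_DATAPUMP_JOBS is a standard view; the job/table name SYS_EXPORT_SCHEMA_01 below is only a hypothetical example, and the real name comes from the JOB_NAME column):

SQL> select owner_name, job_name, operation, job_mode, state
  2  from   dba_datapump_jobs;

SQL> -- the master table carries the same name as the job, in the job owner's schema
SQL> select count(*) from scott.SYS_EXPORT_SCHEMA_01;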



Process Structure :  A Data Pump job comprises several processes. These processes are described in the order of  their creation.

Client Process :  This is the process that makes calls to the Data Pump API. Oracle Database ships four client utilities of this API. These have a very similar look and feel to the original exp and imp clients, but have many more capabilities. Because Data Pump is integrated into Oracle Database, a client is not required once a job is underway; multiple clients may attach to and detach from a job as necessary for monitoring and control.

Shadow Process :  This is the standard Oracle shadow (or foreground) process created when a client logs in to Oracle Database. The shadow process services Data Pump API requests. Upon receipt of a DBMS_DATAPUMP.OPEN request, the shadow process creates the job, which consists primarily of creating the master table, the Advanced Queuing (AQ) queues used for communication among the various processes, and the master control process. Once a job is running, the main task of the shadow process is servicing GET_STATUS requests from the client. If the client detaches, the shadow process also goes away.

Master Control Process (MCP) : As the name implies, the MCP controls the execution and sequencing of a Data Pump job. There is one MCP per Data Pump job, maintaining the job state, job description, restart, and dumpfile information in the master table. A job is divided into various phases of metadata and data unloading or loading, and the MCP hands out work requests to the worker processes appropriate for the current phase. The bulk of MCP processing is performed in this work dispatch loop. The MCP also performs central file management duties, maintaining the active dumpfile list and handing out file pieces as requested by processes unloading data or metadata. An MCP has a process name of the form: <instance>_DMnn_<pid>.

Worker Process : Upon receipt of a START_JOB request, the MCP creates worker processes as needed, according to the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP (primarily unloading and loading of metadata and data), and maintain the object rows that make up the bulk of the master table. As database objects are unloaded or loaded, these rows are written and updated with the current status of these objects: pending, completed, failed, and so on. The worker processes also maintain type completion rows, which describe the type of object currently being worked on: tables, indexes, views, and so on. These type completion rows are used during restart. A worker process has a name of the form: <instance>_DWnn_<pid>.

Parallel Query (PQ) Process :  If the External Tables data access method is chosen for loading or unloading a table or partition, some parallel query processes are created by the worker process that was given the load or unload assignment, and the worker process then acts as the query coordinator. These are standard parallel execution slaves that exploit the parallel execution architecture of Oracle Database, and enable intra-partition loading and unloading. The Data Pump public API is embodied in the PL/SQL package DBMS_DATAPUMP.
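With a job running, all of these processes can be watched from the database side. A minimal monitoring sketch (the views are standard; the join below is only illustrative):

SQL> select j.job_name, j.state, vs.program
  2  from   dba_datapump_jobs     j,
  3         dba_datapump_sessions ds,
  4         v$session             vs
  5  where  ds.job_name   = j.job_name
  6  and    ds.owner_name = j.owner_name
  7  and    ds.saddr      = vs.saddr;

The PROGRAM column makes it easy to pick out the DMnn (master control) and DWnn (worker) sessions attached to the job.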

Data Movement : Data Pump supports four methods of data movement, each of which has different performance and functional characteristics. In descending order of speed, these four methods are:
  •  Data File Copying (transportable tablespaces) 
  • Direct Path load and unload
  • External Tables
  • Conventional Path

Data Pump will choose the best data movement method for a particular operation. It is also possible for the user to specify an access method using command line parameters. The fastest method of moving data is by copying the database data files that contain the data, without interpretation or alteration of the data. This is the method used to move data when transportable mode is specified at export time. There are some restrictions on the use of data file copying: some types of data, some types of tables, and some types of table organization cannot be moved with this method. For example, tables with encrypted columns cannot be moved using this access method. In addition, the character sets must be identical on both the source and target databases in order to use data file copying.
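As a rough sketch of the data file copying method (assuming the USERS tablespace is self-contained, that the datapump directory object from the example later in this post exists, and that the credentials and file names are illustrative), a transportable export looks something like this. Remember that the datafiles themselves must also be copied to the target system along with the dump file.

SQL> alter tablespace users read only;
Tablespace altered.

C:\>expdp   system/manager   directory=datapump   dumpfile=tts_users.dmp   transport_tablespaces=users   logfile=tts_users.log

SQL> alter tablespace users read write;
Tablespace altered.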

Direct path and external tables are the two main data access methods provided by Oracle Database 11g. The direct path access method is the faster of the two, but does not support intra-partition parallelism. The external tables access method does support this function, and therefore may be chosen to load or unload a very large table or partition. Each access method also has certain restrictions that the other does not. For example, a table with active referential constraints or global indexes cannot be loaded using the direct path access method, and a table with a column of data type LONG cannot be loaded with the external tables access method. In most cases, you need not be concerned about choosing an access method; the Data Pump job will make the correct choice based on an array of job characteristics. Both methods write to the dumpfile set in a compact, binary stream format that is approximately 15 percent smaller than the original exp data representation.

When neither direct path nor external tables can handle the data to be imported, Data Pump uses a method called conventional path. For example, a table that contains an encrypted column and a LONG column would be imported using conventional path because direct path cannot be used to import encrypted columns and external tables cannot be used to import LONG columns. Data loading with the conventional access method is much slower than the direct path and external tables methods. So, the Data Pump uses this method only when it has no other choice.

Metadata Movement : The Metadata API (DBMS_METADATA) is used by worker processes for all metadata unloading and loading. Unlike original exp, which stored object definitions as SQL DDL, the Metadata API extracts object definitions from the database and writes them to the dumpfile set as XML documents. This allows great flexibility to apply XML Stylesheet Language Transformations (XSLTs) when creating the DDL at import time. For example, an object’s ownership, storage characteristics, and tablespace residence can be changed easily during import. This robust XML may take up more dumpfile space than the old-style SQL DDL, but it provides more flexibility and features. In addition, the COMPRESSION parameter can be used to decrease the size of metadata written during a Data Pump export job.
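The same Metadata API is available to us directly, which is a handy way to see exactly what Data Pump works with. A small sketch (the object HR.EMPLOYEES is just an example):

SQL> set long 100000
SQL> select dbms_metadata.get_ddl('TABLE', 'EMPLOYEES', 'HR') from dual;
SQL> select dbms_metadata.get_xml('TABLE', 'EMPLOYEES', 'HR') from dual;

GET_DDL returns the CREATE TABLE statement, while GET_XML returns the same definition in the XML form that Data Pump writes to the dumpfile set.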

Interprocess Communication : Advanced Queuing (AQ) is used for communicating among the various Data Pump processes. Each Data Pump job has two queues:
  • Command and control queue: All processes (except clients) subscribe to this queue. All API commands, work requests and responses, file requests, and log messages are processed on this queue.
  • Status queue: Only shadow processes subscribe to read from this queue. It is used to receive work-in-progress and error messages queued by the MCP. The MCP is the only writer to this queue.

File Management : The file manager is distributed across several parts of the Data Pump job. The actual creation of new files and allocation of file segments is handled centrally within the MCP. However, each worker and parallel query process makes local process requests to the file manager to allocate space, read a file chunk, write to a buffer, or update progress statistics. The local file manager determines if the request can be handled locally and if not, forwards it to the MCP using the command and control queue. Reading file chunks and updating file statistics in the master table are handled locally. Writing to a buffer is typically  handled locally, but may result in a request to the MCP for more file space.

Directory Management : All dumpfile set I/O is performed by Oracle server processes, not by the client program of the user running the job. This presents a security dilemma, because the oracle software owner is typically a privileged operating system account. Therefore, all directory specifications are made using Oracle directory objects, with read/write grants established by the DBA.

For example, the DBA may set up a directory as follows:

Step 1 : Create Directory 
SQL> create directory datapump as 'D:\dpump\';

Step 2 :  Grant privileges to user 
SQL> Grant read, write on directory datapump to scott;

Then scott can specify a dump file on the expdp command line as:

C:\>expdp   scott/tiger   dumpfile=hr_emp_tab.dmp   tables=hr.employees   directory=datapump logfile=hr_emp_log.log
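The corresponding import is just as simple. A hedged example (the target schema and the log file name are illustrative) that loads the same dump file into another schema would be:

C:\>impdp   system/manager   dumpfile=hr_emp_tab.dmp   directory=datapump   remap_schema=hr:hr_test   logfile=hr_emp_imp.log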


Enjoy          :-)


Wednesday, March 9, 2011

ORACLE_HOME and ORACLE_BASE

ORACLE_HOME specifies the directory containing the Oracle software for a given release. It corresponds to the environment in which Oracle Database products run. This environment includes the location of installed product files, the PATH variable pointing to the binary files of installed products, registry entries, net service names, and program groups. The Optimal Flexible Architecture (OFA) recommended value is $ORACLE_BASE/product/release/db_1.
For example : /u01/app/oracle/product/10.2.0/db_1.

ORACLE_BASE specifies the directory at the top of the Oracle software and administrative file structure. The value recommended for an OFA configuration is software_mount_point/app/oracle.
For example: /u01/app/oracle.

If we are not using an OFA-compliant system, then we do not have to set ORACLE_BASE, but it is highly recommended that we set it. We can find ORACLE_HOME using the steps below:

Oracle  9i /10g/11g
SQL> select NVL(SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '\', -1, 2) -1) , SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '/', -1, 2) -1))  FOLDER  FROM DBA_LIBRARIES  WHERE LIBRARY_NAME = 'DBMS_SUMADV_LIB' ;
Output : 
FOLDER
-------------------------------------
C:\app\neerajs\product\11.2.0\dbhome_1


Oracle 10g
SQL > VAR OHM VARCHAR2(100);
SQL > EXEC DBMS_SYSTEM.GET_ENV('ORACLE_HOME', :OHM) ;
SQL > PRINT OHM ;


Linux/Unix :
echo $ORACLE_HOME
If the environment variable is not set, echo $ORACLE_HOME prints nothing. In that case, fire the command ps -ef | grep tns ; the tns (listener) entry shows the full path of the Oracle home.


Windows :
1. Click Start > Run 
2. Type "regedit" and press Return/Enter
3. The registry will now be displayed
4. Expand the folders HKEY_LOCAL_MACHINE > SOFTWARE > ORACLE


Enjoy       :-)


Tuesday, March 8, 2011

SYS A "Magic" USER

SYS is a default user, created when the database is created. Although SYS has a lot of power because it is granted the DBA role, it is still in many ways an ordinary user. But because SYS owns the data dictionary, it is considered a bit more special.
Any time someone connects as SYSDBA, the session actually runs as SYS. That is, if SYSDBA is granted to HR, and HR connects as SYSDBA and selects USER from DUAL, it reveals that the session is actually SYS. SYS is also special in that it is not possible to create a trigger in the SYS schema, and a logon trigger is not executed when SYS connects to the database.
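A quick sketch of this behaviour (assuming we grant SYSDBA to HR purely for the test):

SQL> grant sysdba to hr;
Grant succeeded.

SQL> connect hr/hr as sysdba
Connected.

SQL> show user
USER is "SYS"

SQL> select user from dual;

USER
------------------------------
SYS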

Never ever create objects in the SYS schema. The objects belonging to SYS cannot be exported. We cannot do the following database work with any other user except SYS (as SYSDBA):

1.) Database Full Recovery (FULL, UNTIL CANCEL, UNTIL TIME, UNTIL SCN)
2.) Change Database Character Set 
3.) Create Database
4.) Drop Database
5.) Startup and Shutdown
Note : SYSDBA privileges cannot be revoked from the SYS user.


Enjoy          :-)


Identify Your Oracle Database Software Release



To understand the release nomenclature used by Oracle, examine the following example of an Oracle Database server labeled "Release 10.1.0.1.0".
Because Oracle Database continues to evolve and can require maintenance, Oracle periodically produces new releases. Not all customers initially subscribe to a new release or require specific maintenance for their existing release; as a result, multiple releases of the product exist simultaneously. As many as five numbers may be required to fully identify a release. The significance of these numbers is described below.

Release Number Format
                      
Note : Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to the second digit of a release number. In previous releases, the third digit indicated a particular maintenance release.

Major Database Release Number :  The first digit is the most general identifier. It represents a major new version of the software that contains significant new functionality.

Database Maintenance Release Number :  The second digit represents a maintenance release level. Some new features may also be included.

Application Server Release Number :  The third digit reflects the release level of Oracle Application Server (OracleAS).

Component-Specific Release Number : The fourth digit identifies a release level specific to a component. Different components can have different numbers in this position depending upon, for example, component patch sets or interim releases.

Platform-Specific Release Number :  The fifth digit identifies a platform-specific release. Usually this is a patch set. When different platforms require the equivalent patch set, this digit will be the same across the affected platforms.

Checking The Current Release Number :  To identify the release of Oracle Database that is currently installed and to see the release levels of the other database components we are using, query the data dictionary view product_component_version. A sample query follows. (We can also query the v$version view to see component-level information.) Other product release levels may increment independently of the database server.

SQL> select * from product_component_version;

PRODUCT                                  VERSION      STATUS
---------------------------------------- ------------ -----------
NLSRTL                                   10.2.0.1.0   Production
Oracle Database 10g Enterprise Edition   10.2.0.1.0   Production
PL/SQL                                   10.2.0.1.0   Production

It is important to convey the results of this query to Oracle Support when we report problems with the software.
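For the component-level banners, the v$version view mentioned above can be queried in the same way; a minimal sketch:

SQL> select banner from v$version;

Each row is a banner line such as "Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production".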




Enjoy          :-)