Tuesday, September 6, 2011

V$ Views over the years

The dynamic performance views are very helpful for monitoring real-time database performance. The dynamic performance views (we will call them the V$ views for short) are real-time, or near real-time, windows into the guts of Oracle.

Scripts are now floating around which take advantage of these views to supply detailed information about what is going on in the SGA in near real time. It is not uncommon to see scripts that join v$session to v$sqlarea and v$sqltext to find out which SQL is being run by which user right now, and how expensive that SQL is.
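A minimal sketch of such a script might look like the following (the join on address and hash value is the classic pre-10g technique; the column selection and formatting here are illustrative):

SQL> select s.username, s.sid, s.serial#,
            q.sql_text, q.buffer_gets, q.disk_reads
     from   v$session s, v$sqlarea q
     where  s.sql_address    = q.address
     and    s.sql_hash_value = q.hash_value
     and    s.username is not null;

Running this as a DBA shows, for every active user session, the SQL currently associated with it and a rough cost in buffer gets and disk reads.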

The V$ views are like the speedometer and tachometer in a car: they tell us how fast the car (or the database) is going (or not), or like the timing light that helps us adjust the timing. They provide almost immediate feedback on the condition of the database. The figures below show how rapidly the number of Oracle dynamic views has grown.

Version       V$ Views       X$ Tables
---------     ----------     ----------
6                   23          ? (35)
7                   72             126
8.0                132             200
8.1                185             271
9.0                227             352
9.2                259             394
10.1.0.2     340 (+31%)      543 (+38%)
10.2.0.1           396             613
11.1.0.6     484 (+22%)      798 (+30%)


Enjoy        :-) 

Monday, September 5, 2011

Flashback Data Archive (FBDA) In Oracle 11g


Flashback Data Archive (Oracle Total Recall) provides the ability to track and store all transactional changes to a table over its lifetime. It is no longer necessary to build this intelligence into our application. A Flashback Data Archive is useful for compliance with record retention policies and audit reports.

Prior to Oracle 11g, Flashback technology was largely based on the availability of undo data or flashback logs, and both are subject to recycling when space pressure exists. The UNDO tablespace was primarily meant for transaction consistency, not data archival. A Flashback Data Archive, by contrast, is configured with a retention time, and data archived in it is retained for that period. Let's look at an example.

Creating a Flashback Data Archive :

SQL> create flashback archive near_term
           tablespace users
           retention 1 month ;
Flashback archive created.

The archive is created in the USERS tablespace. Assume we have to record changes to a table called EMPLOYEES in the HR schema. All we need to do is enable the Flashback Data Archive status of the table to start recording changes in that archive.

SQL> alter table hr.employees flashback archive near_term;
Table altered.

This puts the table into Flashback Data Archive mode. All changes to the rows of the table will now be tracked and retained for the archive's retention period.

SQL> select salary,job_id from hr.employees where employee_id=121;
SALARY            JOB_ID
---------             ----------
 8200                 ST_MAN

SQL> update hr.employees set salary=50000 where employee_id=121;
1 row updated.

SQL> commit;
Commit complete.

Now, if we select the row, it will display 50000 in this column. To find the older value as of a certain time, we can use a Flashback query, as shown below.

SQL> select salary from hr.employees as of timestamp to_timestamp ('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss')  where employee_id =121;
SALARY
---------
8200
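Besides AS OF, a Flashback Version Query can list every tracked change to the row. A sketch (the VERSIONS BETWEEN clause and the VERSIONS_* pseudocolumns are standard Flashback Version Query syntax):

SQL> select versions_starttime, versions_endtime, salary
     from   hr.employees
            versions between timestamp minvalue and maxvalue
     where  employee_id = 121;

Each row of the result represents one version of the row, with the time window during which that version was current.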

Now, after some time, when the undo data has been purged out of the undo segments, query the flashback data again:

SQL> select salary from hr.employees as of timestamp to_timestamp ('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss') where employee_id = 121;
SALARY
---------
8200

It still comes back with the result 8200. The undo is gone, so where did the data come from? We can find out using autotrace and the execution plan:

SQL> set autotrace traceonly explain

SQL> select salary from hr.employees as of timestamp to_timestamp ('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss') where employee_id = 121;

Check the explain plan details at the link below:
http://www.4shared.com/document/WXMMFOS8/fda_explain_tab.html

This output answers the riddle "Where did the data come from?": it came from the table SYS_FBA_HIST_68909, a location in the Flashback Archive we defined earlier for that table. We can check the table, but peeking directly at that data is not supported by Oracle. In any case, there is rarely a reason to do so.
The data inside the archive is retained, but for how long? This is where the retention period comes into play: data is retained up to that period, after which older data is purged as new data comes in. We can also purge it ourselves, e.g.
SQL> alter flashback archive near_term purge before scn xxxxxxxx;

Disable flashback :  Disable flashback archiving for the table EMPLOYEES :
SQL> ALTER TABLE hr.employees NO FLASHBACK ARCHIVE;

Remove Flashback Data Archive and all its historical data, but not its tablespaces:
SQL> DROP FLASHBACK ARCHIVE near_term ;
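To see which archives exist and which tables are being tracked, we can query the data dictionary (DBA_FLASHBACK_ARCHIVE and DBA_FLASHBACK_ARCHIVE_TABLES are the 11g dictionary views for this; column choice here is illustrative):

SQL> select flashback_archive_name, retention_in_days, status
     from   dba_flashback_archive;

SQL> select table_name, owner_name, flashback_archive_name
     from   dba_flashback_archive_tables;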

Use Cases :  
Flashback Data Archive is handy for many purposes. Here are some ideas:
• To audit how data has changed
• To enable an application to undo changes (correct mistakes)
• To debug how data has been changed
• To comply with regulations that require that data not be changed after a certain time; Flashback Data Archives are not regular tables, so they cannot be changed by typical users
• To record audit trails on cheaper storage, allowing longer retention at lower cost

For more detail visit :
http://www.oracle.com/pls/db111/search?remark=quick_search&word=flashback+data+archive


Enjoy     :-) 

Saturday, September 3, 2011

Difference Between Upgradation and Migration in Oracle


Upgradation   :  An upgrade is the process of replacing our existing software with a newer version of the same product, for example replacing an Oracle 9i release with an Oracle 10g release. Upgrading our applications usually does not require special tools; existing reports should look and behave the same in both products, although minor changes may sometimes be seen. An upgrade is done at the software level.

Migration   :   A migration is the process of replicating applications from one product in another, for example transforming existing Oracle 9i applications into Oracle 10g applications. A migration is any change that transforms our hardware and/or software architecture to a new state. A migration is done at the database level (say, migrating from DB2 to Oracle).


Enjoy  :-) 

Invisible Indexes in Oracle 11g


An invisible index is an index that is ignored by the optimizer unless we explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or system level. One use of the invisible index feature is to test the removal of an index before dropping it. Prior to 11g, this was typically achieved by making an index unusable during a set period of time. During this observation period, the DBA would monitor the database performance to determine whether or not to drop the index. If performance was negatively affected, the index would need to be rebuilt before it could be used again. Beginning with Oracle 11g, the DBA has the option of making the index invisible, as opposed to unusable, during this observation period. If performance degradation is observed, the index can be made visible again without having to rebuild it.

Another potential use for invisible indexes is in situations where specific applications require an index temporarily.  An index can be created as invisible to allow specific SQL statements to use the index while leaving the rest of the database unaffected.  
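For example (a sketch; the table, column, and index names here are made up), a nightly batch job could enable invisible indexes for its own session while OLTP sessions continue to ignore them:

SQL> create index batch_date_idx on orders(batch_date) invisible;

SQL> -- in the batch session only:
SQL> alter session set optimizer_use_invisible_indexes = true;

All other sessions keep the default of FALSE, so the index costs them nothing at plan time while the batch session benefits from it.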

The effective use of an index is always a big question. There are cases in production environments where an index may help the performance of a single query while degrading the performance of many others. The optimizer cannot always choose the best index for a query in an environment cluttered with too many indexes on a table. Below is a demo of invisible indexes.

SQL> create table test1 as select * from all_objects;
Table created.

SQL> select count(*) from test1;
COUNT(*)
-------------
 71480

SQL> create index obj_idx on test1(object_name) invisible;
Index created.

SQL> select index_name,VISIBILITY from user_indexes where index_name='OBJ_IDX';
INDEX_NAME                     VISIBILITY
-------------------                    --------------
OBJ_IDX                             INVISIBLE

There is a new initialization parameter modifiable at system or session level called 
OPTIMIZER_USE_INVISIBLE_INDEXES. This parameter is FALSE by default, meaning the optimizer will not consider invisible indexes when evaluating alternative execution plans.

SQL> sho parameter OPTIMIZER_USE_INVISIBLE_INDEXES
NAME                                                          TYPE          VALUE
---------------------------------------                 -----------      -------------
optimizer_use_invisible_indexes              boolean         FALSE

We can alter an existing index to become invisible or visible.

SQL> alter index OBJ_IDX visible;
Index altered.

SQL> select index_name,VISIBILITY from user_indexes where index_name='OBJ_IDX';
INDEX_NAME                     VISIBILITY
-------------------                 --------------
OBJ_IDX                            VISIBLE

SQL> alter index OBJ_IDX invisible;
Index altered.

SQL> select index_name,VISIBILITY from user_indexes where index_name='OBJ_IDX';
INDEX_NAME                     VISIBILITY
--------------------                   --------------
OBJ_IDX                            INVISIBLE

We can see that the invisible index is not considered by the optimizer using a 10053 trace.

SQL> alter session set events '10053 trace name context forever, level 1';
Session altered.

SQL> explain plan for select * from TEST1 where OBJECT_NAME='ALL_OBJECTS';
Explained.

***************************************
BASE STATISTICAL INFORMATION
***************************************
Table Stats::
  Table: TEST1  Alias: TEST1
    #Rows: 71480  #Blks:  905  AvgRowLen:  83.00
Index Stats::
  Index: OBJ_IDX  Col#: 2
    LVLS: 1  #LB: 206  #DK: 71480  LB/K: 1.00  DB/K: 1.00  CLUF: 963.00
UNUSABLE

SQL> explain plan for select * from test1 where object_name='ALL_OBJECTS';
Explained.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
Plan hash value: 4122059633
--------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     2 |   194 |   286   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| TEST1 |     2 |   194 |   286   (1)| 00:00:04 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("OBJECT_NAME"='ALL_OBJECTS')

13 rows selected.

We can change the parameter OPTIMIZER_USE_INVISIBLE_INDEXES to see if the index will be used.

SQL> alter session set optimizer_use_invisible_indexes=true;
Session altered.

SQL> explain plan for select * from test1 where object_name='ALL_OBJECTS';
Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 1376202287
----------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |     2 |   316 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST1   |     2 |   316 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | OBJ_IDX |     2 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OBJECT_NAME"='ALL_OBJECTS')

Note
-----
   - dynamic sampling used for this statement

14 rows selected.


Enjoy    :-)  


Thursday, September 1, 2011

Virtual Column in Oracle 11g


Oracle 11g has introduced a new feature that allows us to create a "virtual column": a column whose value is not stored but is computed from an expression over the other columns of the table (the expression itself is stored in the data dictionary). Oracle 11g thus enables us to store expressions directly in the base tables themselves as virtual columns. Virtual columns are more flexible than any of their prior alternatives.

The syntax for defining a virtual column is listed below.
column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL]

Creating a virtual column :     Here we will create a table with a virtual column as follows.

SQL> create table vir_tab 
     ( id number , 
       sal number , 
       bonus number generated always as (id + sal) virtual 
     );
 Table created.

We can see that the virtual column is generated from a simple expression involving the other columns in our table. Note that the VIRTUAL keyword is optional and is included for what Oracle calls "syntactic clarity".

Virtual column values are not stored on disk. They are generated at runtime using their associated expression (in our example, id + sal). This has some implications for the way we insert data into tables with virtual columns, as we can see below.

SQL> insert into vir_tab values (100,11000,11100);
insert into vir_tab values (100,11000,11100)
            *
ERROR at line 1:
ORA-54013: INSERT operation disallowed on virtual columns

Hence, we cannot explicitly add data to virtual columns, so we will attempt an insert into the physical columns only, as follows.

SQL> insert into vir_tab values(100,11000);
insert into vir_tab values(100,11000)
            *
ERROR at line 1:
ORA-00947: not enough values

Here we see that although we cannot insert into virtual columns, they are still considered part of the table's column list. This means we must explicitly reference the physical columns in our INSERT statements, as follows.

SQL> insert into vir_tab (id,sal) values (100,11000);
1 row created.

Now we have successfully inserted the data and can query the table.

SQL> select * from vir_tab;

  ID        SAL      BONUS
---- ---------- ----------
 100      11000      11100


Indexes and Constraints on virtual columns :

SQL> create index sal_idx on vir_tab(sal);

For a constraint :

SQL> alter table vir_tab add constraint vir_id_pk primary key(id);

Benefits of Virtual Columns :

  • Automatic re-computation of derived columns for ad-hoc query tools
  • Reduction in redundant disk space for columns that must be derived from other columns (e.g. a MONTH column derived from a DATE column)
  • Easier for interval partitioning
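For instance, the virtual column itself can serve as the partitioning key. A sketch (the table and column names are made up for illustration):

SQL> create table sales
     ( sale_id   number,
       sale_date date,
       sale_mon  number generated always as (extract(month from sale_date)) virtual
     )
     partition by list (sale_mon)
     ( partition q1 values (1,2,3),
       partition q2 values (4,5,6),
       partition q3 values (7,8,9),
       partition q4 values (10,11,12)
     );

Rows are routed to the right partition from the computed month, without the application ever supplying it.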


There are a few restrictions on virtual columns:
1.   We cannot write/insert data into virtual columns.
2.   Virtual columns are not supported on index-organized, external, object, cluster, or temporary tables.
3.   There is no support for Oracle-supplied datatypes, user-defined datatypes, LOBs, or LONG RAWs.

Note that we can partition a table based on a virtual column. We can use the query below to find the virtual columns defined in the user's schema.

SQL> SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, HIDDEN_COLUMN
     FROM USER_TAB_COLS WHERE VIRTUAL_COLUMN = 'YES';

Reference ::  http://www.oracle-developer.net/display.php?id=510


Enjoy     :-)