Tuesday, February 7, 2012

ORA-01034 & ORA-27101: Shared Memory Realm Does Not Exist

ORA-27101 and ORA-01034 can occur together in various scenarios. The most common reason is that the database is not up. We may hit these errors while trying to connect to the Oracle database, or sometimes during installation, and in a few other situations as well. Here are some possible scenarios (some are from the Windows platform and some from Linux).
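Before digging into the individual cases, a quick first check is whether the instance is running at all. A minimal sketch (assuming an instance named noida; adjust the SID for your environment):

Linux :
$ ps -ef | grep ora_pmon          # a pmon process means the instance is up

Windows :
C:\> sc query OracleServiceNOIDA    # the service STATE should be RUNNING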


Case 1 :  When trying to connect as a normal user, we get the error:

C:\>sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Mon Feb 6 11:22:08 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Enter user-name: scott
Enter password:
ERROR:
ORA-01034: ORACLE not available
ORA-27101:  shared memory realm does not exist
Process ID: 0
Session ID: 0 Serial number: 0

To solve this issue, set the ORACLE_SID, connect as the "sys" user with sysdba privileges, and start the database:

C:\>set ORACLE_SID=noida
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Feb 6 11:25:13 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1375792 bytes
Variable Size             331350480 bytes
Database Buffers          197132288 bytes
Redo Buffers                5804032 bytes
Database mounted.
Database opened.

Case 2 :  On a system with multiple IP addresses, connecting with SQL*Plus locally on the server (i.e. "sqlplus user/password") works fine, but connecting through a TNS alias, either from the network or locally on the server, raises the following errors:

$ sqlplus user/password@db_alias
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
SVR4 Error: 2: No such file or directory

We need to manually start the database, or check whether the listener is running. So either connect as
$ export ORACLE_SID=noida
$ sqlplus sys/password as sysdba
SQL> startup
or
check the status of the listener, and (re)start it if it is not running:
$ lsnrctl
LSNRCTL> status
LSNRCTL> stop
LSNRCTL> start
LSNRCTL> exit
$ export ORACLE_SID=noida
$ sqlplus sys/password@noida as sysdba
SQL> startup


Case 3 :  Make sure, while connecting, that ORACLE_SID and ORACLE_HOME are correctly set. There must be no trailing slash at the end of the ORACLE_HOME path; remove any extra "/" from the end:

Incorrect Home location :   ORACLE_HOME = /u01/app/oracle/product/11.2.0/
Correct Home location   :   ORACLE_HOME = /u01/app/oracle/product/11.2.0
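To verify what is currently set before reconnecting, we can simply echo the variables (a quick sketch for a Linux shell; the values shown are illustrative):

$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0
$ echo $ORACLE_SID
noida

On Windows the equivalent is "echo %ORACLE_HOME%" and "echo %ORACLE_SID%".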

Once the path is correctly set, reload the listener and set (or export) ORACLE_SID before connecting:
Linux  : 
$export ORACLE_SID=noida 
$sqlplus sys/xxxx@noida as sysdba
SQL> startup


Windows  : 
C:\> set ORACLE_SID=noida
C:\> sqlplus sys/xxxx@noida as sysdba
SQL> startup


Case 4 :   Sometimes on Windows, if the event log is full, we get this error. So clear the event logs and try to connect again.
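If clearing the logs through the Event Viewer GUI is inconvenient, Windows Vista/2008 and later also ship the wevtutil command line utility; a small sketch (run from an elevated command prompt):

C:\> wevtutil cl Application
C:\> wevtutil cl System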

Case 5 :  Sometimes on Windows we can resolve this issue by simply restarting the Oracle services. Restart the Windows services as follows:
Start --> cmd --> net stop OracleServiceXXXX
              --> net start OracleServiceXXXX


Enjoy     :-) 


Friday, February 3, 2012

SQL*Plus Error Logging in Oracle 11g


SQL*Plus is one of the most commonly used tools among DBAs, and SQL*Plus error logging is a useful new feature in Oracle 11g. It provides an additional method of trapping errors: when error logging is enabled, SQL, PL/SQL and SQL*Plus errors and their associated parameters are recorded in an error log table (SPERRORLOG by default), and we can then query that table to review the errors resulting from a query.
Note : This is an 11g SQL*Plus feature, not a database engine feature.


Why  Error Logging ?
We normally spool script output to capture errors and then trawl the spool logs for the error messages. This works fine for one or a few scripts but becomes cumbersome when many scripts are involved. Secondly, we need an OS path to store the spool files, the right permissions, and so on. To overcome these issues, error logging captures the errors in a database table rather than in OS files, which makes them much easier to locate.


Steps to Activate the Error Logging : 


1.) Check the status of Error Logging :  To check the status of error logging, run the command below
SQL> show errorlogging
errorlogging is OFF
Note: Error logging is set OFF by default.


2.) Enable the Error Logging :  Whenever we enable error logging, the default table SPERRORLOG is created. Enable it using the command below
SQL> set errorlogging on
SQL> show errorlogging
errorlogging is ON TABLE  SCOTT.SPERRORLOG
As we can see, the default table "SPERRORLOG" has been created in the SCOTT schema, since the current user is scott. The SPERRORLOG table is always created in the current user's schema.


Creating a User Defined Error Log Table :
We can create one or more error log tables to use instead of the default. Before specifying a user defined error log table, let's have a look at the default error log table:


SQL> desc  sperrorlog
 Name                     Null?    Type
 ------------------------ -------- ----------------
 USERNAME                          VARCHAR2(256)
 TIMESTAMP                         TIMESTAMP(6)
 SCRIPT                            VARCHAR2(1024)
 IDENTIFIER                        VARCHAR2(256)
 MESSAGE                           CLOB
 STATEMENT                         CLOB


For each error, the error logging feature records the columns shown above. To use a user defined log table, we must have permission to access the table, and we must issue the SET ERRORLOGGING command with the TABLE schema.tablename option to identify the error log table and, if applicable, the schema. Here is the syntax for specifying a user-defined table:
SQL> set errorlogging on table [schema].[table]
for example :  
SQL> set errorlogging on table  hr.Error_log_table 


Demo to create user-defined table


Step 1 : Create the table : If we point error logging at a user-defined table that does not yet exist, we get the error below:
SQL> set errorlogging on table  hr.Error_log_table
SP2-1507: Errorlogging table, role or privilege is missing or not accessible


Create the table as
SQL> create table hr.Error_log_table (
  2    username    varchar2(256),
  3    timestamp   timestamp,
  4    script      varchar2(1024),
  5    identifier  varchar2(256),
  6    message     clob,
  7    statement   clob ) ;
Table created.


Step 2 :   Enable error logging with the user-defined table
SQL> show errorlogging
errorlogging is OFF
SQL> set errorlogging on table  hr.Error_log_table
SQL> show errorlogging
errorlogging  is  ON TABLE   hr.Error_log_table


Step 3 :  Generate some errors 
SQL> selet  *   from  employees ;
SP2-0734: unknown command beginning "selet * fr..." - rest of line ignored.
SQL> select  *   from  employe ;
select * from employe
              *
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> set linesze 2000
SP2-0158: unknown SET option "linesze"


Step 4 : Check the errors recorded in the user-defined error log
SQL> select  *  from  hr.Error_log_table ;
SQL> commit ;

Without a commit, other sessions won't see these rows. Here I have committed and taken the output from another session for the sake of proper formatting.
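Since MESSAGE and STATEMENT are CLOB columns, the raw select can be hard to read. A sketch of a more readable query (the column widths are just illustrative choices):

SQL> set long 200
SQL> column username format a10
SQL> column identifier format a10
SQL> column message format a40
SQL> select username, timestamp, identifier, message from hr.Error_log_table ;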


We can truncate to clear all existing rows in the error log table and begin recording errors from the current session:
SQL> set errorlogging on truncate
SQL> select * from  Error_log_table ;
No rows selected


We can also set a unique identifier to make it easier to find the logged records. We can use it to identify errors from a particular session or from a particular version of a query.
SQL> set errorlogging on identifier  'MARK' 
SQL> select  *  from  employ ; 
select * from employ
              *
ERROR at line 1:
ORA-00942: table or view does not exist


Now check the identifier : 
SQL> select  *  from  hr.Error_log_table ; 

We can delete records just as with a regular table:
SQL> delete hr.Error_log_table  where IDENTIFIER='MARK' ; 
SQL> commit;


Disable Error Logging : 
SQL> set errorlogging OFF
SQL> show errorlogging
errorlogging is OFF




Enjoy    :-) 



Saturday, January 21, 2012

Cross-Platform Transportable Tablespaces in Oracle

Oracle transportable tablespaces are the fastest way to move large volumes of data between two Oracle databases. Starting with Oracle Database 10g, we can transport tablespaces across platforms. Furthermore, like import and export, transportable tablespaces provide a mechanism for transporting metadata in addition to the data itself. Below are the steps to perform a cross-platform tablespace transport.

Prerequisites :
A few points have to be considered before performing a transportable tablespace operation:

1.) The source and target database must use the same character set and national character set .

2.) We cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, we can rename either the tablespace to be transported or the destination tablespace before the transport operation (see the rename sketch after this list).

3.) Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.

4.) If the owners of tablespace objects do not exist in the target database, the usernames need to be created manually before starting the transportable tablespace import. If we use spatial indexes, then:
  • be aware that TTS across different endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation has been lifted in 11g.
  • specific Spatial packages must be run before exporting and after transporting; please see the Oracle Spatial documentation.
5.) We cannot transport the SYSTEM  tablespace or objects owned by the user SYS . 

6.) Opaque Types (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application  must address any endianness issues after these types are moved to the new  platform .

7.) Floating-Point Numbers BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump but not the original export utility, EXP.
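For example, if the target database already has a tablespace with the same name (point 2 above), one of the two can be renamed before the transport. A minimal sketch, assuming our demo tablespace name:

SQL> alter tablespace tes_tbs rename to tes_tbs_old ;
Tablespace altered.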

If we are migrating a database, make sure there are no invalid objects in the source database before taking the export. Take a full no-rows export to recreate the objects that won't be transported with TTS, and recompile invalid objects by running the utlrp.sql script. Keep the source database viable until we have determined that all objects exist in the target database and there are no issues (i.e. the target database has been thoroughly checked out).
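A quick way to check for invalid objects and recompile them before the export (a sketch; utlrp.sql ships under $ORACLE_HOME/rdbms/admin):

SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID' ;
SQL> @?/rdbms/admin/utlrp.sql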

Here we perform a cross-platform transportable tablespace demo. The details of the source and target are as follows:

Source : 
OS               = Red Hat Linux (32-bit)
Oracle Version   = Oracle 10.2.0
Database Name    = comcast
Tablespace       = TES_TBS (which is to be transported)

Target : 
OS               = Microsoft Windows (32-bit)
Oracle Version   = Oracle 11.2.0
Database Name    = noida

Step 1 : Determine Platform Support and Endianness (on source) :
This step is only necessary if we are transporting the tablespace set to a platform different from the source platform. Determine whether cross-platform tablespace transport is supported for both the source and target platforms, and also determine the endianness of each platform . We can query the v$transportable_platform view to see the platforms that are supported, and to determine each platform's endian format (byte ordering). 
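To see every platform the database can exchange tablespaces with, we can simply list the whole view (a sketch; the exact rows returned vary by Oracle version):

SQL> select platform_id, platform_name, endian_format from v$transportable_platform order by platform_id ;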
Now check the endian formats of our own source and target with the query below:
Source endians :

SQL> select  d.platform_name, endian_format  from   v$transportable_platform  tp ,   v$database d   where  tp.PLATFORM_NAME  =  d.PLATFORM_NAME ;
PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- --------------
Linux IA (32-bit)                   Little

Target endians  :

SQL> select d.platform_name, endian_format from v$transportable_platform tp , v$database d where tp.PLATFORM_NAME = d.PLATFORM_NAME ;
PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- --------------
Microsoft Windows IA (32-bit)       Little

Here we notice that both source and target have the same endian format. If the endian formats differ, follow step 2; otherwise skip it and move on to step 3.

Step 2 :  Different endian formats ( skip step 2 if the endians match ) :
If the endian formats are different, a conversion is necessary before transporting the tablespace. For example, run the commands below to convert a tablespace from the source (Linux 64-bit) format to the target (Solaris 64-bit) platform.

i.>  Using  CONVERT  Tablespace   FROM  PLATFORM  on  Source  host

RMAN> convert tablespace test_user_tbs
2> to platform 'Solaris[tm] OE (64-bit)'
3> format '/oradata/rman_backups/%N_%f' ;

The data file "test_user_tbs01.dbf" itself is not touched; a new file in Solaris format is created under "/oradata/rman_backups", which we then copy to the target platform. Alternatively, we can use RMAN's CONVERT DATAFILE command on the destination host to convert the copied datafiles to the destination format; in the example below the converted datafiles are stored in "/solaris/oradata".

ii.>  Using CONVERT DATAFILE... FROM PLATFORM on Destination host

RMAN> convert datafile '/linux/oradata/test_user_tbs01.dbf'
2> from platform 'Linux IA (64-bit)'
3> db_file_name_convert = ('/linux/oradata/','/solaris/oradata/') ;

Let's have a demo of the transportable tablespace . 

Step 3  : Create a tablespace and user  : 
Here we will create the Tablespace and User on the source database and this tablespace will be transported further .

SQL> create tablespace tes_tbs datafile '/home/oracle/oradata/comcast/tes_tbs.dbf' size 100m ;
Tablespace created.

SQL> create user tes identified by tes 
  2  default tablespace tes_tbs 
  3  quota unlimited on tes_tbs ; 
User created.

SQL> grant resource,connect to tes ;
Grant succeeded.

SQL> create table tes.t1 as select * from hr.employees ; 
Table created.

SQL> create index TES.IND_EMP_T1 on tes.t1(employee_id) tablespace sysaux ;
Index created.

SQL> create table tes.t2 as select * from dba_extents; 
Table created.

Step 4 :  Check for a Self-Contained Set of Tablespaces :
There may be logical or physical dependencies between objects in the transportable set and objects outside of the set. We can only transport a set of tablespaces that is self-contained. Some examples of self-contained tablespace violations are:
  • An index inside the set of tablespaces is for a table outside of the set of tablespaces.
  • A partitioned table is partially contained in the set of tablespaces.
  • A referential integrity constraint points to a table across a set boundary.


The statement below can be used to determine whether the tablespace set is self-contained:
SQL> conn sys as sysdba
Enter password: 
Connected.

SQL> EXECUTE dbms_tts.transport_set_check('TES_TBS', TRUE, TRUE) ;
PL/SQL procedure successfully completed.

The DBMS_TTS package checks whether the transportable set is self-contained. Any violations are inserted into a temporary table that can be queried through the transport_set_violations view.

SQL> select * from   transport_set_violations;
VIOLATIONS
----------------------------------------------------------------------------------------------------
Index TES.IND_EMP_T1 in tablespace SYSAUX points to table TES.T1 in tablespace TES_TBS

Since there is a violation, we manually move the offending object into the tablespace being transported. Here we rebuild the index into tablespace TES_TBS:

SQL> alter  index  TES.IND_EMP_T1  rebuild  tablespace  tes_tbs ;
Index altered.

Now run the dbms_tts check again to look for remaining violations.

SQL> EXECUTE dbms_tts.transport_set_check('TES_TBS', TRUE, TRUE);
PL/SQL procedure successfully completed.

SQL> select * from   transport_set_violations;
no rows selected

Now there are no violations. If any dependent object had remained outside the set, we would get "ORA-29341: The transportable set is not self-contained" while exporting the tablespace.

Step 5 : Generate a Transportable Tablespace Set : 
After ensuring that we have a self-contained set of tablespaces to transport, generate the transportable tablespace set. First, make the tablespaces read-only.

SQL> alter tablespace tes_tbs read only ;
Tablespace altered.

Now export the metadata of tablespace "tes_tbs"  as 

$ expdp system/xxxx  dumpfile=tes_tbs_exp.dmp  transport_tablespaces=tes_tbs  transport_full_check=y  logfile=tes_tbs_export.log

Export: Release 10.2.0.1.0 - Production on Friday, 20 January, 2012 14:03:53
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=tes_tbs_exp.dmp transport_tablespaces=tes_tbs transport_full_check=y logfile=tes_tbs_export.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/product/10.2.0/db_1/rdbms/log/tes_tbs_exp.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 14:07:44

Step 6 : Copy the datafile and dumpfile to the target database 
Transport both the datafiles and the export dump file of the tablespaces to a location that is accessible to the target database.
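For example, from the Linux source we might push both files with scp (a sketch; the host name and staging directory are illustrative, and any copy method works as long as the transfer is binary):

$ scp /home/oracle/oradata/comcast/tes_tbs.dbf oracle@target-host:/staging/
$ scp /home/oracle/product/10.2.0/db_1/rdbms/log/tes_tbs_exp.dmp oracle@target-host:/staging/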

Step 7 : Import the dumpfile into the target database 
Before importing the dumpfile, make sure that a tablespace of the same name does not already exist in the target database. Also make sure the owning user (here TES, with default tablespace TES_TBS) exists in the target database; alternatively, the remap_schema parameter can be used to load the objects into some other schema.

C:\>impdp system/xxxx  dumpfile=tes_tbs_exp.dmp TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' logfile=tes_imp.log

Import: Release 11.2.0.1.0 - Production on Fri Jan 20 14:38:14 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=tes_tbs_exp.dmp  TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' logfile=tes_imp.log

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 14:38:48

Step 8 : Connect to target Database  

C:\>sqlplus sys/xxxx@noida as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 20 14:40:04 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter tablespace tes_tbs read write ;
Tablespace altered.
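We can quickly confirm that the tablespace is plugged in and online before handing it over (a small verification sketch):

SQL> select tablespace_name, status from dba_tablespaces where tablespace_name = 'TES_TBS' ;

TABLESPACE_NAME                STATUS
------------------------------ ---------
TES_TBS                        ONLINE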

SQL> conn tes/tes
Connected.

SQL> select  *  from tab;

TNAME                          TABTYPE     CLUSTERID
------------------------------ ----------- ----------
T1                             TABLE
T2                             TABLE

Here we find that the tablespace has been transported successfully with all its tables. The transportable tablespace feature is useful in a number of scenarios, including:
  • Exporting and importing partitions in data warehousing tables 
  • Copying multiple read-only versions of a tablespace on multiple databases
  • Performing tablespace point-in-time-recovery (TSPITR) 
  • Migrating databases among RDBMS versions and OS platforms



Enjoy       :-)