Oracle transportable tablespaces are the fastest way to move large volumes of data between two Oracle databases. Starting with Oracle Database 10g, tablespaces can be transported across platforms. Furthermore, like export and import, transportable tablespaces move metadata in addition to data. Below are the steps to perform a cross-platform transportable tablespace operation.
Prerequisites :
There are a few points that must be considered before performing a transportable tablespace operation. They are:
1.) The source and target database must use the same character set and national character set.
2.) We cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, we can rename either the tablespace to be transported or the destination tablespace before the transport operation.
3.) Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
4.) If the owners of tablespace objects do not exist in the target database, the usernames must be created manually before starting the transportable tablespace import. If we use spatial indexes, then:
- be aware that TTS across different endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation was lifted in 11g.
- Specific Spatial packages must be run before exporting and after transportation; see the Oracle Spatial documentation.
5.) We cannot transport the SYSTEM tablespace or objects owned by the user SYS.
6.) Opaque types (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform.
7.) The floating-point types BINARY_FLOAT and BINARY_DOUBLE are transportable using Data Pump but not the original export utility, EXP.
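As noted in point 2.), a name clash can be resolved by renaming a tablespace before the transport. A minimal sketch (the new name is illustrative):

```sql
-- Rename the clashing tablespace on the target (or rename the one to be
-- transported on the source); supported from Oracle 10g onward.
ALTER TABLESPACE tes_tbs RENAME TO tes_tbs_old;
```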
If we are migrating a database, make sure there are no invalid objects in the source database before taking the export. Take a full no-rows export to recreate objects that won't be transported with TTS, and remove all invalid objects by running the utlrp.sql script. Keep the source database viable until we have determined that all objects are in the target database and there are no issues (i.e. the target database has been thoroughly checked out).
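Invalid objects can be spotted on the source with a quick dictionary query before the export; a sketch:

```sql
-- List invalid objects in the source database
SELECT owner, object_type, object_name
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER  BY owner, object_name;

-- Then recompile them in bulk with the standard script:
-- SQL> @?/rdbms/admin/utlrp.sql
```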
Here we perform a cross-platform transportable tablespace demo. The details of the source and target are as below:
Source :
OS = Redhat Linux (32 bit)
Oracle Version = Oracle 10.2.0
Database Name = comcast
Tablespace = TES_TBS (which is to be transported)
Target :
OS = Microsoft Windows (32 bit)
Oracle Version = Oracle 11.2.0
Database Name = noida
Step 1 : Determine Platforms Support and Endianness (on source) :
This step is only necessary if we are transporting the tablespace set to a platform different from the source platform. Determine whether cross-platform tablespace transport is supported for both the source and target platforms, and determine the endianness of each platform. We can query the v$transportable_platform view to see the platforms that are supported and each platform's endian format (byte ordering).
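To see the full list of supported platforms and their byte ordering, the view can also be queried on its own:

```sql
SQL> select platform_id, platform_name, endian_format
  2  from v$transportable_platform
  3  order by platform_id;
```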
Now check the endian formats with the queries below:
Source endians :
SQL> select d.platform_name, endian_format from v$transportable_platform tp , v$database d where tp.PLATFORM_NAME = d.PLATFORM_NAME ;
PLATFORM_NAME ENDIAN_FORMAT
----------------------- ----------------------
Linux IA (32-bit) Little
Target endians :
SQL> select d.platform_name, endian_format from v$transportable_platform tp , v$database d where tp.PLATFORM_NAME = d.PLATFORM_NAME ;
PLATFORM_NAME ENDIAN_FORMAT
----------------------------- ---------------------
Microsoft Windows IA (32-bit) Little
Here we notice that both the source and target have the same endian format. If the endian formats differ, follow step 2; otherwise skip it and move to step 3.
Step 2 : Different endian formats (skip step 2 if the endian formats match) :
If the endian formats are different, a conversion is necessary before transporting the tablespace. For example, run the commands below to convert a tablespace from the source (Linux 64-bit) to the target (Solaris 64-bit) platform.
i.> Using CONVERT Tablespace FROM PLATFORM on Source host
RMAN> convert tablespace test_user_tbs
2> to platform 'Solaris[tm] OE (64-bit)'
3> format '/oradata/rman_backups/%N_%f' ;
The datafile "test_user_tbs01.dbf" is not touched; a new file for the Solaris platform is created under "/oradata/rman_backups". Copy this file to the target platform. Alternatively, RMAN's CONVERT DATAFILE command can be run on the destination host to convert the transferred datafiles to the destination host's format; the converted datafiles are then stored in "/solaris/oradata".
ii.> Using CONVERT DATAFILE... FROM PLATFORM on Destination host
RMAN> convert datafile 'test_user_tbs01.dbf'
2> from platform 'Linux IA (64-bit)'
3> db_file_name_convert '/linux/oradata/' , '/solaris/oradata/' ;
Let's have a demo of the transportable tablespace.
Step 3 : Create a tablespace and user :
Here we create the tablespace and user on the source database; this is the tablespace that will be transported.
SQL> create tablespace tes_tbs datafile '/home/oracle/oradata/comcast/tes_tbs.dbf' size 100m ;
Tablespace created
SQL> create user tes identified by tes
2 default tablespace tes_tbs
3 quota unlimited on tes_tbs ;
User created.
SQL> grant resource,connect to tes ;
Grant succeeded.
SQL> create table tes.t1 as select * from hr.employees ;
Table created.
SQL> create index TES.IND_EMP_T1 on tes.t1(employee_id) tablespace sysaux ;
Index created.
SQL> create table tes.t2 as select * from dba_extents;
Table created.
Step 4 : Check Self-Contained Set of Tablespaces :
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. We can only transport a set of tablespaces that is self-contained. Some examples of self-contained tablespace violations are:
- An index inside the set of tablespaces is for a table outside of the set of tablespaces.
- A partitioned table is partially contained in the set of tablespaces.
- A referential integrity constraint points to a table across a set boundary.
The statement below can be used to determine whether the tablespace is self-contained:
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> EXECUTE dbms_tts.transport_set_check('TES_TBS', TRUE, TRUE) ;
PL/SQL procedure successfully completed.
The DBMS_TTS package checks if the transportable set is self-contained. All violations are inserted into a temporary table that can be selected from the transport_set_violations view.
SQL> select * from transport_set_violations;
VIOLATIONS
----------------------------------------------------------------------------------------------------
Index TES.IND_EMP_T1 in tablespace SYSAUX points to table TES.T1 in tablespace TES_TBS
Since there is a violation, we manually move the object into the target tablespace. Here we rebuild the index into tablespace 'TES_TBS' as follows:
SQL> alter index TES.IND_EMP_T1 rebuild tablespace tes_tbs ;
Index altered.
Again, run the dbms_tts package to check for violations.
SQL> EXECUTE dbms_tts.transport_set_check('TES_TBS', TRUE, TRUE);
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;
no rows selected
Now there are no violations. If any dependent object remained outside the set, we would get the error "ORA-29341: The transportable set is not self-contained" while exporting the tablespace.
Step 5 : Generate a Transportable Tablespace Set :
After ensuring that we have a self-contained set of tablespaces to transport, generate a transportable tablespace set by performing the following actions. First, make the tablespaces read-only.
SQL> alter tablespace tes_tbs read only ;
Tablespace altered.
Now export the metadata of tablespace "tes_tbs" as follows:
$ expdp system/xxxx dumpfile=tes_tbs_exp.dmp transport_tablespaces=tes_tbs transport_full_check=y logfile=tes_tbs_export.log
Export: Release 10.2.0.1.0 - Production on Friday, 20 January, 2012 14:03:53
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** dumpfile=tes_tbs_exp.dmp transport_tablespaces=tes_tbs transport_full_check=y logfile=tes_tbs_export.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/home/oracle/product/10.2.0/db_1/rdbms/log/tes_tbs_exp.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 14:07:44
Step 6 : Copy the datafile and dumpfile to the target database
Transport both the datafiles and the export dump file of the tablespaces to a location that is accessible to the target database.
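Any binary-safe file transfer works for this step; for example, with scp from the Linux source host (the target hostname and staging directory below are placeholders for this demo):

```shell
# Copy the datafile and the Data Pump dump file to the target host.
scp /home/oracle/oradata/comcast/tes_tbs.dbf oracle@target-host:/staging/
scp /home/oracle/product/10.2.0/db_1/rdbms/log/tes_tbs_exp.dmp oracle@target-host:/staging/
```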
Step 7 : Import the dumpfile in target database
Before importing the dump file, make sure that a tablespace with the same name does not exist in the target database. Also ensure that the owning user (whose default tablespace is "TES_TBS") exists in the target database; alternatively, the remap_schema parameter can be used to import the objects into another schema.
C:\>impdp system/xxxx dumpfile=tes_tbs_exp.dmp TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' logfile=tes_imp.log
Import: Release 11.2.0.1.0 - Production on Fri Jan 20 14:38:14 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** dumpfile=tes_tbs_exp.dmp TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' logfile=tes_imp.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 14:38:48
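If the objects should land in a different schema on the target, the same import can be run with the Data Pump remap_schema parameter added (the target schema name here is illustrative):

```shell
impdp system/xxxx dumpfile=tes_tbs_exp.dmp ^
  TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' ^
  remap_schema=tes:tes_new logfile=tes_imp.log
```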
Step 8 : Connect to target Database
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 20 14:40:04 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> alter tablespace tes_tbs read write ;
Tablespace altered.
SQL> conn tes/tes
Connected.
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
---------- ----------- --------------
T1 TABLE
T2 TABLE
Here we find that the tablespace has been successfully transported with all its tables. The transportable tablespace feature is useful in a number of scenarios, including:
- Exporting and importing partitions in data warehousing tables
- Copying multiple read-only versions of a tablespace on multiple databases
- Performing tablespace point-in-time-recovery (TSPITR)
- Migrating databases among RDBMS versions and OS platforms
Enjoy :) :) :)