Saturday, October 29, 2011

Very good summary of the DELETE, TRUNCATE and DROP commands

Hello everyone,

Today I came across a very good explanation of the DELETE, TRUNCATE and DROP commands.
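
In short (a minimal sketch against a hypothetical table t):

-- DELETE is DML: it removes selected rows, generates undo/redo and can be rolled back
DELETE FROM t WHERE id > 100;
ROLLBACK;            -- the deleted rows come back

-- TRUNCATE is DDL: it removes all rows by deallocating the segment and cannot be rolled back
TRUNCATE TABLE t;

-- DROP is DDL: it removes the table definition and its data from the data dictionary
DROP TABLE t;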


Enjoy:-)

Tuesday, October 11, 2011

What are LOGGING, NOLOGGING and FORCE LOGGING?


Oracle gives us the ability to limit redo generation on tables and indexes by setting them to NOLOGGING mode.

NOLOGGING affects recoverability. Before going into how to limit redo generation, it is important to clear up the misunderstanding that NOLOGGING eliminates redo generation altogether; keep the following points in mind:

NOLOGGING is designed to handle bulk inserts of data which can be easily reproduced.

Regardless of the LOGGING status, writing to undo blocks always generates redo.
LOGGING should not be disabled on a primary database if it has one or more standby databases; for this reason Oracle introduced the ALTER DATABASE FORCE LOGGING command in Oracle 9i R2. If the database is in FORCE LOGGING mode, the NOLOGGING attribute has no effect on segments. NOLOGGING can also be overridden at the tablespace level using ALTER TABLESPACE … FORCE LOGGING.

FORCE LOGGING can be used at the tablespace or database level to force logging of changes to the redo log. This may be required for sites that are mining log data, using Oracle Streams, or using Data Guard (standby databases).
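
A minimal sketch of the related commands (the table and tablespace names are hypothetical):

-- Put a table into NOLOGGING mode (only affects direct-path/bulk operations)
ALTER TABLE emp_hist NOLOGGING;

-- Force logging for the whole database (overrides any NOLOGGING attribute)
ALTER DATABASE FORCE LOGGING;

-- Or force logging for a single tablespace
ALTER TABLESPACE users FORCE LOGGING;

-- Check the current database setting
SELECT force_logging FROM v$database;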

Enjoy:-)

Friday, September 30, 2011

How To Check Whether Physical Standby is in Sync with the Primary or Not?


The following tips will help you in monitoring sync between primary and standby databases.

1. Check for GAP on standby
2. Check redo received on standby
3. Check redo applied on standby

Solution:

Execute the following queries:

A. On Primary

SQL> SELECT THREAD# "Thread",SEQUENCE# "Last Sequence Generated"
FROM V$ARCHIVED_LOG
WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)
ORDER BY 1;

Thread     Last Sequence Generated
---------- -----------------------
1          19
2          13
3          11

B. On Physical Standby

SQL> SELECT ARCH.THREAD# "Thread",
            ARCH.SEQUENCE# "Last Sequence Received",
            APPL.SEQUENCE# "Last Sequence Applied",
            (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
     FROM
     (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
     WHERE ARCH.THREAD# = APPL.THREAD#
     ORDER BY 1;

Thread     Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- ----------
1          19                     19                    0
2          13                     13                    0
3          11                     11                    0


C. On Physical Standby

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

no rows selected

Now perform following checks:

1. Check for GAP

If query “C” returns any rows, it means some archive logs are missing on the standby.
Example:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

Thread     Low Sequence High Sequence
---------- ------------ -------------
1          8            9

This example shows that sequences 8 and 9 from thread 1 are missing on the standby; hence the standby is not in sync with the primary.
If query “C” does not return any rows and the output is “no rows selected”, then there is no archive gap on the standby.

2. Check for redo received on standby

Compare the value of “Last Sequence Generated” in query “A” with “Last Sequence Received” in query “B” for all threads.

If both values are the same, the standby has received the last sequence generated on the primary.
If the values are not the same, some archives are missing on the standby; hence the standby is not in sync with the primary.

3. Check for redo applied on standby

Compare “Last Sequence Received” with “Last Sequence Applied” in query “B” for each thread (the “Difference” column). If the difference is 0 for every thread, the standby has applied all the redo it has received; a non-zero difference means the standby is lagging behind in applying redo.
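
As an additional cross-check on the standby (assuming an 11g standby, where V$DATAGUARD_STATS reports these values), you can also query the lag statistics directly:

SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');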

enjoy:-)

Database Switchover


A database can be in one of two mutually exclusive roles (primary or standby). These roles can be switched at runtime without loss of data or resetting of the redo logs. This process is known as a switchover and can be performed using the following statements:

-- Convert primary database to standby

SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

ERROR at line 1:
ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected

If the above error occurs, try shutting down the database, starting it up again, and then retrying the switchover.

If the same error still persists, do the following in sequence:

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES;   (the value should be 0; see the sketch below if it needs to be changed)

SQL> SELECT SID, PROCESS, PROGRAM FROM V$SESSION;

SQL> alter database commit to switchover to PHYSICAL STANDBY WITH SESSION SHUTDOWN;
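
If JOB_QUEUE_PROCESSES is not already 0, here is a minimal sketch of disabling it (remember to restore the original value once the switchover is complete):

-- Any remaining user sessions reported by the V$SESSION query can, if necessary,
-- be terminated with: ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;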

-- Shutdown primary database
SHUTDOWN IMMEDIATE;

-- Mount old primary database as standby database
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On the standby database, issue the following commands:

-- Convert standby database to primary
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;



SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY
*
ERROR at line 1:
ORA-16139: media recovery required


If the above error occurs, execute the commands below:


SQL> alter database recover managed standby database finish;


SQL> alter database commit to switchover to primary with session shutdown;

-- Shutdown standby database
SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
STARTUP;

enjoy:-)

Defining the time interval after which redo logs will be applied to the standby:


In this post we will learn how to set a time delay for redo log application on the standby.


You can set a delay between the arrival of the archived redo log and it being applied on the standby server using the following commands.

On standby server: 

-ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;

You can also switch back to NODELAY, which means redo logs will be applied immediately, using the commands below:

-ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;
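
To see what the managed recovery process is doing (and whether a delay is in effect), a hedged check on the standby against V$MANAGED_STANDBY (the DELAY_MINS column is available in 10g/11g):

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, DELAY_MINS
     FROM V$MANAGED_STANDBY
     WHERE PROCESS LIKE 'MRP%';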

enjoy:-)

Thursday, September 29, 2011

External tables with Datapump


External tables are largely used as a convenient way of moving data into and out of the database. They let you query data in a flat file as though the file were an Oracle table.

Oracle uses SQL*Loader functionality, through the ORACLE_LOADER access driver, to move data from the flat file into the database; it uses the ORACLE_DATAPUMP access driver to move data out of the database into a file in an Oracle-proprietary format, and back into the database from files of that format.

Real-Life use of External Tables:

Suppose that you receive a daily .csv file from a reporting department. Instead of writing a SQL*Loader script to import each day's .csv file into your database, you can simply create an external table and write an "insert ... select" SQL query to insert the data directly into your tables. Place the day's CSV file in the location specified in the external table definition, run the query, and you're done.

Let's see how it's done:

Choose or create an OS directory where you will put your .CSV (flat) file.

In my case I have taken: 'D:\et\'

My CSV file, named "emp.csv", will look like this:

001,Hutt,Jabba,896743856,jabba@thecompany.com,18
002,Simpson,Homer,382947382,homer@thecompany.com,20
003,Kent,Clark,082736194,superman@thecompany.com,5
004,Kid,Billy,928743627,billythkid@thecompany.com,9
005,Stranger,Perfect,389209831,nobody@thecompany.com,23

The next step is to create this directory in Oracle and grant read/write access on it to the Oracle user who will be creating the external table:

SQL> create directory opump as 'D:\et\';

Directory created.

SQL> grant read,write on directory opump to gg;

Grant succeeded.

Now, create the external table corresponding to the .CSV file.

SQL> conn gg/gg
Connected.

SQL>  create table xtern_empl_rpt
  2          ( empl_id varchar2(3),
  3            last_name varchar2(50),
  4            first_name varchar2(50),
  5            ssn varchar2(9),
  6           email_addr varchar2(100),
  7            years_of_service number(2,0)
  8          )
  9          organization external
 10         ( default directory opump
 11           access parameters
 12           ( records delimited by newline
 13             fields terminated by ','
 14           )
 15           location ('emp.csv')
 16       );

Table created.

The external table has been successfully created. Let's check the data in Oracle!

SQL> sho user;
USER is "GG"
SQL> select * from xtern_empl_rpt ;
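
The query returns the five rows loaded from emp.csv. And, as in the real-life use case described above, you can now load the data into a regular table with a plain insert ... select (the target table empl_rpt below is hypothetical):

-- create an empty permanent table with the same structure, then load it from the flat file
SQL> create table empl_rpt as select * from xtern_empl_rpt where 1=0;

SQL> insert into empl_rpt select * from xtern_empl_rpt;

SQL> commit;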



Note: If you add more records to the CSV file (emp.csv), the Oracle table "xtern_empl_rpt" will automatically pick them up.

For example, I will add a new record to my CSV file:

7,Vishu,DBA,123,itsmevishu82@gmail.com,25

Save the file.

Let's see if the record shows up in the Oracle table:
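
Because an external table reads the flat file at query time, the new row is visible immediately; for example:

SQL> select * from xtern_empl_rpt where empl_id = '7';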

Unloading data into an external file using DATAPUMP.

Oracle 10g lets you create a new external table from data in your database, which goes into a flat file pushed from the database using the ORACLE_DATAPUMP access driver. This flat file is in an Oracle-proprietary format that can be read by DataPump. The syntax is similar to the CREATE TABLE... ORGANIZATION EXTERNAL above, but simpler -- since you can't specify the data format, you can specify very few access_parameters. The key difference is that you must specify the access driver, ORACLE_DATAPUMP, since the access driver defaults to ORACLE_LOADER.

Let's see how it's done.

First, create a user called "mm" and grant the privileges as below (the user will also need basic privileges such as CREATE SESSION and CREATE TABLE, not shown here).

SQL> create user mm identified by mm;

User created.

SQL> grant read,write on directory opump to mm;

Grant succeeded.

Now, connect as mm and create the dump file using an external table.

SQL> conn mm/mm
Connected.

SQL> create table import_empl_info
  2      ( empl_id varchar2(3),
  3        last_name varchar2(50),
  4        first_name varchar2(50),
  5        ssn varchar2(9),
  6        birth_dt date
  7      )
  8      organization external
  9      ( type oracle_datapump
 10       default directory opump
 11       location ('empl_info_rpt.dmp')
 12     ) ;

Table created.

This creates the dump file "empl_info_rpt.dmp", which can be moved to and used in any other database (or the same one).

IMPORT THE TABLE BACK IN AGAIN

We will now load the data back into Oracle from the dump file created through the external table.

First drop the table "import_empl_info".

SQL> drop table import_empl_info;

Table dropped.

Now load the data again ..

SQL> create table import_empl_info
  2      ( empl_id varchar2(3),
  3        last_name varchar2(50),
  4        first_name varchar2(50),
  5        ssn varchar2(9)
  6        )
  7      organization external
  8      ( type oracle_datapump
  9       default directory opump
 10       location ('empl_info_rpt.dmp')
 11     ) ;

We have now seen how external tables in 9i and 10g provide a convenient, seamless way to move data in and out of the database.

Restrictions imposed on external tables:

1.) External tables cannot be used in all the ways regular Oracle tables can.
2.) You cannot perform DML operations (INSERT, UPDATE, DELETE) against external tables; they can only be populated when they are created (see the sketch below).
3.) You can't create an index on an external table.
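
For example, an attempted update of the external table created earlier is rejected (typically with ORA-30657, "operation not supported on external organized table"):

-- DML against an external table fails
SQL> update xtern_empl_rpt set years_of_service = 10 where empl_id = '001';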

Enjoy:-)

Wednesday, September 28, 2011

Remap data in datapump


Remap data lets you export data and modify it on the way out based on a remapping scheme. The remap_data parameter uses a user-defined package/function to alter the data.

DEMO:-

Create user "gg"

SQL> create user gg identified by gg;

User created.

SQL> grant dba to gg;

Grant succeeded.

SQL> conn gg/gg@delhi
Connected.

Create table "dp_test" in "gg"

SQL> create table dp_test (name varchar2(10),age number);

Table created.
SQL> insert into dp_test values('&a',&25);
Enter value for a: a
Enter value for 25: 10
old   1: insert into dp_test values('&a',&25)
new   1: insert into dp_test values('a',10)

1 row created.

SQL> /
Enter value for a: b
Enter value for 25: 15
old   1: insert into dp_test values('&a',&25)
new   1: insert into dp_test values('b',15)

1 row created.

SQL> /
Enter value for a: c
Enter value for 25: 22
old   1: insert into dp_test values('&a',&25)
new   1: insert into dp_test values('c',22)

1 row created.

SQL> commit;

Commit complete.

SQL> select * from dp_test;

NAME              AGE
---------- ----------
a                  10
b                  15
c                  22


Create the package:

SQL>  create or replace package add_age as
  2   function modify_age (roll_in number) return number;
  3   end;
  4  /

Package created.

SQL>  create or replace package body gg.add_age as
  2   function modify_age (roll_in number) return number
  3   as
  4   roll_out number;
  5   begin
  6   roll_out := roll_in + 10;
  7   return(roll_out);
  8   end;
  9   end;
 10
 11  /

Package body created.

Create another user called "scott1".


SQL> create user scott1 identified by tiger1;

User created.

SQL> grant dba to scott1;

Grant succeeded.

Take export of table "dp_test".

SQL> host expdp gg/gg@delhi dumpfile=dp_test.dmp logfile=dp_test.log tables=gg.dp_test remap_data=gg.dp_test.age:gg.add_age.modify_age

Export: Release 11.2.0.1.0 - Production on Wed Sep 28 16:19:47 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "GG"."SYS_EXPORT_TABLE_01":  gg/********@delhi dumpfile=dp_test.dmp logfile=dp_test.log tables=gg.dp_test remap_data=gg.dp_test.age:gg.add_age.modify_age
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "GG"."DP_TEST"                              5.429 KB       3 rows
Master table "GG"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for GG.SYS_EXPORT_TABLE_01 is:
  D:\APP\BISHWANATH\ADMIN\DELHI\DPDUMP\DP_TEST.DMP
Job "GG"."SYS_EXPORT_TABLE_01" successfully completed at 16:19:58


Now, import table "dp_test" into the scott1 schema.

SQL> host impdp scott1/tiger1  dumpfile=dp_test.dmp logfile=dp_testimp.log remap_schema=gg:scott1 
Import: Release 11.2.0.1.0 - Production on Wed Sep 28 16:21:33 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SCOTT1"."SYS_IMPORT_FULL_01":  scott1/******** dumpfile=dp_test.dmp logfile=dp_testimp.log remap_schema=gg:scott1 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT1"."DP_TEST"                          5.429 KB       3 rows
Job "SCOTT1"."SYS_IMPORT_FULL_01" successfully completed at 16:21:37


SQL> conn scott1/tiger1
Connected.

Check the data of "dp_test"

SQL> select * from dp_test;

NAME              AGE
---------- ----------
a                  20
b                  25
c                  32


As you can see, the values in column "AGE" have been incremented by 10.

Enjoy:-)

Advantages and Disadvantages of Datapump


Advantages:

===> Better control over running jobs – it provides features like start, stop and restart

===> Improved performance, because it is a server-side technology with a parallel streams option

===> Using the parallel streams option, Data Pump can back up large volumes of data quickly

===> Data Pump is 15-50% faster than conventional export/import.

===> It has the ability to estimate the job times

===> Failed jobs can be restarted

===> Using the EXCLUDE/INCLUDE options we can perform fine-grained object selection

===> Backup jobs can be monitored

===> It has remapping capabilities

===> It supports export/import operations over the network. The NETWORK_LINK parameter initiates the export using a database link

===> Using the QUERY parameter, a DBA can extract a subset of a table's data, as with a SELECT

===> The CONTENT parameter gives flexibility over what to import/export, for example metadata only, data only or both (see the sketch after this list)

===> It supports the full range of data types

===> It supports cross-platform compatibility

===> No need to specify the buffer size like in normal exp/imp

===> It has its own performance tuning features

===> The V$SESSION_LONGOPS view can be used for time estimation of Data Pump jobs

===> It supports an interactive mode that allows the DBA to monitor or interact with ongoing jobs

===> Dumps can be compressed

===> Data can be encrypted

===> XML schemas and XMLType are supported by Data Pump
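
A hedged example combining a few of the options listed above (the directory, schema and table names are hypothetical; on most shells the double quotes in QUERY need escaping, or the parameters can be placed in a parameter file):

expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=emp_dept10.dmp TABLES=scott.emp CONTENT=DATA_ONLY QUERY=scott.emp:"WHERE deptno = 10"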

Disadvantages:

===> Export cannot be taken to tape

===> Import will work only with Oracle 10g or above

===> Cannot be used with Unix pipes

Related Views:

DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS

Enjoy:-)

Difference between Conventional path and Direct Path


Conventional path means that a SQL INSERT statement is used to load data into tables. Direct path deals with data much faster than conventional path. Direct path achieves this performance gain by inserting data directly, bypassing the SQL command-processing layer and saving on data copies whenever possible.

A direct path load can also be run with minimal redo generation (for example as a NOLOGGING/unrecoverable operation); in that case, once your load is complete you should take a backup immediately in order to keep your data safe.

Let's elaborate on them!

--Conventional path Export. 

Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file. 

--Direct path Export. 

When using a Direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file. 
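
With the original export utility, direct path is requested with the DIRECT parameter; a minimal sketch (user and table names are hypothetical):

exp scott/tiger FILE=emp_direct.dmp TABLES=emp DIRECT=Y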

Enjoy:-)

Datapump enhancements in Oracle 11g


Below are some enhancements done in Oracle 11g:

-Compression: the COMPRESSION parameter can now compress both data and metadata using the value ALL.

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest_comp.dmp compression=ALL

-Data Pump Encryption Enhancements:

Different encryption modes are available during the export. These include Dual, Password, and Transparent. The mode that is used is set using the encryption_mode parameter. 
Here is an example of using the new encryption parameters for a data pump export:

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest_comp.dmp encryption=all encryption_password=not_quest encryption_algorithm=AES128 encryption_mode=PASSWORD

-Data Pump Rename Table (or remap_table)

Sometimes it's desirable to rename a table during the import process. In Oracle 11g you can now do this with Oracle Data Pump. To rename a table during the import process, use the remap_table parameter: define the original schema and table name, and then indicate the new name for the table. Here is an example where the table QUEST.NAMES is renamed to COPY_NAMES:

impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES remap_table=QUEST.NAMES:COPY_NAMES

-Data Pump and Partitioned Tables:

If you want some control over the partitioning of tables during an Oracle Data Pump import, you can use the partition_options parameter, which is a new feature available in impdp in Oracle 11g. The following options are available when using the partition_options parameter:

None - Tables will be imported such that they look like those on the system on which the export was created.
Departition - Partitions will be created as individual tables rather than partitions of a partitioned table.
Merge - Causes all partitions to be merged into one unpartitioned table.

In this example we use the partition_options parameter to take all partitions of the NAMES table and combine them into one unpartitioned table on the destination side:

impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES partition_options=merge

-Overwrite Dump Files

When using expdp in Oracle 11g you can now overwrite a dump file that already exists by using the new reuse_dumpfiles parameter, as seen in this example:

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES reuse_dumpfiles=y

-Data Pump Data_Options Parameter

Have you ever loaded records into a table with Oracle Data Pump Import, only to have the entire load fail because a few duplicate records were in the import file? Assume that your EMP table has three rows in it. EMPNO is the primary key, and here is what a query of the table looks like:

SQL> select empno from emp;
     EMPNO
----------
      7369
      8000
      8001

3 rows selected.

When you try to load your Data Pump Export file, the load fails because it contains these three rows, which violate the primary key of the table. The load and the resulting error messages look like this:

C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN>impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp  table_exists_action=append

Import: Release 11.1.0.6.0 - Production on Tuesday, 01 January, 2008 23:51:32
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01":  scott/******** dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=
append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SCOTT"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to tab
le_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (SCOTT.PK_EMP) violated
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Job "SCOTT"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 23:51:35
And the resulting table looks just the same:

SQL> select empno from emp;
     EMPNO
----------
      7369
      8000
      8001

3 rows selected.

The load of the EMP table has completely failed. Data Pump does not come with a reject limit clause like SQL*Loader does, so the whole load of the table just fails, even if it's just one record causing the problem.

Oracle 11g solves this problem with a new parameter for use with Data Pump Import. This parameter is data_options=skip_constraint_errors. When this parameter is used, Data Pump Import will skip the rows that generate a constraint error (without loading them, of course) and will continue to load the rows that do not generate an error. Here is an example of an import using this new parameter:

SQL> select empno from emp;
     EMPNO
----------
      7369
      8000
      8001
3 rows selected.

C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN>impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp  table_exists_action=append data_options=skip_constraint_errors

Import: Release 11.1.0.6.0 - Production on Tuesday, 01 January, 2008 23:55:33
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01":  scott/******** dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=
append data_options=skip_constraint_errors
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SCOTT"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to tab
le_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."EMP"                               8.187 KB      13 out of 16 rows
3 row(s) were rejected with the following error:
ORA-00001: unique constraint (SCOTT.PK_EMP) violated

Rejected rows with the primary keys are:
 Rejected row #1:
   column EMPNO: 8000
 Rejected row #2:
   column EMPNO: 8001
 Rejected row #3:
   column EMPNO: 7369
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Job "SCOTT"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 23:55:36
Note in this case that 3 rows were rejected, but the remaining 13 rows were successfully loaded. Here is a select query against the table. This looks much better:

SQL> select empno from emp;
     EMPNO
----------
      7369
      7499
      7521
      7566
      7654
      7698
      7782
      7788
      7839
      7844
      7876
      7900
      7902
      7934
      8000
      8001

16 rows selected.

There is one main restriction with this feature and that is that the Data Pump import must use the external table method when loading data. Data Pump can use two methods of importing (or exporting) data, direct path and external tables. When you import using data_options=skip_constraint_errors Data Pump must use external tables. Normally this will happen as a default, so you don't have to worry about it. However, if you do get an error because Oracle Data Pump is not using the external table method you can use the undocumented parameter access_method=external_table to try to force Data Pump to use external tables.
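
Following the description above, forcing the external-table method would look something like this (access_method is undocumented, so treat this as an assumption and a last resort):

impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append data_options=skip_constraint_errors access_method=external_table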

The new Oracle Data Pump parameter data_options also provides for special handling of other types of data-related issues. Use the XML_CLOBS setting of the data_options parameter of the Data Pump Export utility to override the default behavior of Oracle Data Pump, which is to compress the format of the XMLType CLOB. If you use this option be aware that the XML schemas at the source and destination must be the same.

-The Transportable Parameter:

When the transportable parameter is used in impdp or expdp, only the metadata associated with specific tables, partitions, or sub-partitions will be extracted, rather than all metadata. You can then proceed to transport the associated data files as you normally would. You export the data with expdp:

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES transportable=always

You then copy the data files and the dump file set to the destination and plug in the database. You would use impdp as part of this process to import the metadata into the database, as seen in this example:

impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES remap_schema=Robert:new_Robert

Enjoy:-)