• (240)-917-6758
  • richard@rkkoranteng.com

Faster Run-Time In Oracle 12c Data Pump Import

I remember when Data Pump was introduced in the Oracle 10g database; it was ground-breaking at the time, with cool features such as the ability to pause and resume an operation or even export/import an entire tablespace. Well, if you thought that was exciting, then hold on to your seat for what you are about to hear: the Oracle 12c database allows you to suppress the generation of redo during a Data Pump import. 

Redo Generation And Conventional Data Pump

So here is the problem with 10g/11g Data Pump (conventional) import operations…THEY GENERATE A LOT OF REDO.  Because a conventional import is a logical operation, redo is generated throughout, and the log writer (LGWR) must perform I/O to flush it. In some cases this redo generation causes enough overhead to noticeably slow down the Data Pump import.  DBAs used innovative, and sometimes unsafe, methods to overcome this challenge and speed up the import job:

  • utilize parallelism to maximize the use of the box
  • set the hidden parameter _disable_logging to keep LGWR from performing I/O for the redo flush
  • tell the database to behave in NOLOGGING mode

Even though some of the solutions listed above seemed clever, they were actually disastrous to the database environment in several ways:

  • parallelism relies on the CPU resources of the machine, so if you do not have enough capacity you are just hurting the overall performance of the box
  • using undocumented parameters such as _disable_logging without the blessing of Oracle Support is not best practice. If your instance crashes while this parameter is enabled (during the import), your database could suffer serious damage/corruption, and recovery will be a nightmare. The same headache applies if you decide to enable NOLOGGING.

12c Database Solution To Avoid Redo Generation During IMPDP

In the Oracle 12c database we can use the TRANSFORM parameter of Data Pump import (impdp) to temporarily disable logging for just that particular operation. Simply add the TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y argument to your impdp command or parameter file.  The default setting is “N”, which has no effect on logging behavior. A value of “Y” reduces the logging associated with tables and indexes during the import by setting their logging attribute to NOLOGGING before the data is imported and resetting it to LOGGING once the operation is complete. Here are some key things to keep in mind about the nologging option for 12c Data Pump:

  • Redo logging for other operations still happens
  • Valid for both file mode imports and network mode imports
  • The DISABLE_ARCHIVE_LOGGING option has no effect if your database is running in FORCE LOGGING mode (reassuring for those who are concerned about Data Guard environments)
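If you run imports regularly, the same option can live in a parameter file. The sketch below is a hypothetical example of my own; the dump-file and log-file names are placeholders, not values from any real job:

```
# import.par -- hypothetical Data Pump parameter file
directory=MYDIR
dumpfile=mydata.dmp
logfile=import.log
transform=disable_archive_logging:y
```

It would then be invoked with impdp system/welcome1 parfile=import.par.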

As I stated earlier, you can scope the DISABLE_ARCHIVE_LOGGING option to tables only, to indexes only, or apply it to both.

Disable logging for tables only:

 TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE 

Disable logging for indexes only:

 TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX 


In the example below, I’m disabling logging for both tables and indexes in the Data Pump import operation

$ impdp system/welcome1 directory=MYDIR transform=disable_archive_logging:y

In the example below, I’m disabling logging for just the tables in the Data Pump import operation

$ impdp system/welcome1 directory=MYDIR transform=disable_archive_logging:y:table

In the example below, I’m disabling logging for just the indexes in the Data Pump import operation

$ impdp system/welcome1 directory=MYDIR transform=disable_archive_logging:y:index
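If you switch between these three forms often, a tiny shell helper can assemble the command for you. This is purely an illustrative sketch of my own; build_impdp_cmd and the hard-coded connect string and directory are assumptions, not part of Data Pump:

```shell
#!/bin/sh
# Assemble an impdp command using the DISABLE_ARCHIVE_LOGGING transform.
# Scope may be empty (tables and indexes), TABLE, or INDEX.
# (build_impdp_cmd is a hypothetical helper; connect string/directory are placeholders.)
build_impdp_cmd() {
  scope="$1"
  transform="transform=disable_archive_logging:y"
  case "$scope" in
    "") ;;                                          # default: tables and indexes
    TABLE|INDEX) transform="$transform:$scope" ;;   # restrict to one object type
    *) echo "unsupported scope: $scope" >&2; return 1 ;;
  esac
  echo "impdp system/welcome1 directory=MYDIR $transform"
}

build_impdp_cmd INDEX
# prints: impdp system/welcome1 directory=MYDIR transform=disable_archive_logging:y:INDEX
```

Calling it with no argument emits the plain transform=disable_archive_logging:y form shown in the first example above.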

Is it worth it?

One of my clients asked whether the NOLOGGING feature of 12c Data Pump is worth the hype, and my answer was YES.  I baselined it for my customer using a ~700MB dump file containing ~6 million records. The results were outstanding.

Data Pump without disable_archive_logging (with redo generation)

  • took almost 5 minutes and generated 15 archive logs (online redo logs are 100MB in size)

Data Pump with disable_archive_logging (no redo generation)

  • took almost 3 minutes and generated no archive logs

Although my import was fairly small, I was still able to see the performance gain.  Note that the Data Pump import does not permanently change the segment attributes to NOLOGGING.  The comparison between the conventional Data Pump job and the new one shows a striking improvement: the new job has a faster run-time and generated no additional archive logs. Test it out and let me know what you think.