.\" Automatically generated by Pod::Man 4.14 (Pod::Simple 3.42)
.\"
.\" Standard preamble:
.\" ========================================================================
.de Sp \" Vertical space (when we can't use .PP)
.if t .sp .5v
.if n .sp
..
.de Vb \" Begin verbatim text
.ft CW
.nf
.ne \\$1
..
.de Ve \" End verbatim text
.ft R
.fi
..
.\" Set up some character translations and predefined strings.  \*(-- will
.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left
.\" double quote, and \*(R" will give a right double quote.  \*(C+ will
.\" give a nicer C++.  Capital omega is used to do unbreakable dashes and
.\" therefore won't be available.  \*(C` and \*(C' expand to `' in nroff,
.\" nothing in troff, for use with C<>.
.tr \(*W-
.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p'
.ie n \{\
.    ds -- \(*W-
.    ds PI pi
.    if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch
.    if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\"  diablo 12 pitch
.    ds L" ""
.    ds R" ""
.    ds C` ""
.    ds C' ""
'br\}
.el\{\
.    ds -- \|\(em\|
.    ds PI \(*p
.    ds L" ``
.    ds R" ''
.    ds C`
.    ds C'
'br\}
.\"
.\" Escape single quotes in literal strings from groff's Unicode transform.
.ie \n(.g .ds Aq \(aq
.el       .ds Aq '
.\"
.\" If the F register is >0, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
.\" entries marked with X<> in POD.  Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
.\"
.\" Avoid warning from groff about undefined register 'F'.
.de IX
..
.nr rF 0
.if \n(.g .if rF .nr rF 1
.if (\n(rF:(\n(.g==0)) \{\
.    if \nF \{\
.        de IX
.        tm Index:\\$1\t\\n%\t"\\$2"
..
.        if !\nF==2 \{\
.            nr % 0
.            nr F 2
.        \}
.    \}
.\}
.rr rF
.\"
.\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2).
.\" Fear.  Run.  Save yourself.  No user-serviceable parts.
.    \" fudge factors for nroff and troff
.if n \{\
.    ds #H 0
.    ds #V .8m
.    ds #F .3m
.    ds #[ \f1
.    ds #] \fP
.\}
.if t \{\
.    ds #H ((1u-(\\\\n(.fu%2u))*.13m)
.    ds #V .6m
.    ds #F 0
.    ds #[ \&
.    ds #] \&
.\}
.    \" simple accents for nroff and troff
.if n \{\
.    ds ' \&
.    ds ` \&
.    ds ^ \&
.    ds , \&
.    ds ~ ~
.    ds /
.\}
.if t \{\
.    ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u"
.    ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u'
.    ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u'
.    ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u'
.    ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u'
.    ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u'
.\}
.    \" troff and (daisy-wheel) nroff accents
.ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V'
.ds 8 \h'\*(#H'\(*b\h'-\*(#H'
.ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#]
.ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H'
.ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u'
.ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#]
.ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#]
.ds ae a\h'-(\w'a'u*4/10)'e
.ds Ae A\h'-(\w'A'u*4/10)'E
.    \" corrections for vroff
.if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u'
.if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u'
.    \" for low resolution devices (crt and lpr)
.if \n(.H>23 .if \n(.V>19 \
\{\
.    ds : e
.    ds 8 ss
.    ds o a
.    ds d- d\h'-1'\(ga
.    ds D- D\h'-1'\(hy
.    ds th \o'bp'
.    ds Th \o'LP'
.    ds ae ae
.    ds Ae AE
.\}
.rm #[ #] #H #V #F C
.\" ========================================================================
.\"
.IX Title "ORA2PG 1"
.TH ORA2PG 1 "2023-08-15" "perl v5.34.0" "User Contributed Perl Documentation"
.\" For nroff, turn off justification.  Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
.nh
.SH "NAME"
Ora2Pg \- Oracle to PostgreSQL database schema converter
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
Ora2Pg is a free tool used to migrate an Oracle database to a PostgreSQL
compatible schema. It connects to your Oracle database, scans it automatically
and extracts its structure or data, then generates \s-1SQL\s0 scripts that you can
load into your PostgreSQL database.
.PP
Ora2Pg can be used for anything from reverse engineering an Oracle database to
migrating a huge enterprise database, or simply replicating some Oracle data into
a PostgreSQL database. It is really easy to use and doesn't require any Oracle
database knowledge other than providing the parameters needed to connect to the
Oracle database.
.SH "FEATURES"
.IX Header "FEATURES"
Ora2Pg consists of a Perl script (ora2pg) and a Perl module (Ora2Pg.pm). The
only thing you have to modify is the configuration file ora2pg.conf, by setting
the \s-1DSN\s0 to the Oracle database and optionally the name of a schema. Once that's
done you just have to set the type of export you want: \s-1TABLE\s0 with constraints,
\&\s-1VIEW, MVIEW, TABLESPACE, SEQUENCE, INDEXES, TRIGGER, GRANT, FUNCTION, PROCEDURE,
PACKAGE, PARTITION, TYPE, INSERT\s0 or \s-1COPY, FDW, QUERY, KETTLE, SYNONYM.\s0
.PP
By default Ora2Pg exports to a file that you can load into PostgreSQL with the
psql client, but you can also import directly into a PostgreSQL database by
setting its \s-1DSN\s0 in the configuration file. With all the configuration options of
ora2pg.conf you have full control over what is exported and how.
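.PP
As a sketch, a minimal ora2pg.conf could contain little more than the following
directives (each directive is followed by a tab and its value; the \s-1DSN\s0 and
credentials below are placeholders to adapt, the directive names are the ones
used in this documentation):
.PP
```
ORACLE_DSN	dbi:Oracle:host=oradb_host.myhost.com;sid=DB_SID;port=1521
ORACLE_USER	system
ORACLE_PWD	manager
TYPE	TABLE
```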
.PP
Features included:
.PP
.Vb 10
\&        \- Export full database schema (tables, views, sequences, indexes), with
\&          unique, primary, foreign key and check constraints.
\&        \- Export grants/privileges for users and groups.
\&        \- Export range/list partitions and sub partitions.
\&        \- Export a table selection (by specifying the table names).
\&        \- Export Oracle schema to a PostgreSQL 8.4+ schema.
\&        \- Export predefined functions, triggers, procedures, packages and
\&          package bodies.
\&        \- Export full data or following a WHERE clause.
\&        \- Full support of Oracle BLOB object as PG BYTEA.
\&        \- Export Oracle views as PG tables.
\&        \- Export Oracle user defined types.
\&        \- Provide some basic automatic conversion of PLSQL code to PLPGSQL.
\&        \- Works on any platform.
\&        \- Export Oracle tables as foreign data wrapper tables.
\&        \- Export materialized view.
\&        \- Show a report of the Oracle database content.
\&        \- Migration cost assessment of an Oracle database.
\&        \- Migration difficulty level assessment of an Oracle database.
\&        \- Migration cost assessment of PL/SQL code from a file.
\&        \- Migration cost assessment of Oracle SQL queries stored in a file.
\&        \- Generate XML ktr files to be used with Pentaho Data Integrator (Kettle).
\&        \- Export Oracle locator and spatial geometries into PostGIS.
\&        \- Export DBLINK as Oracle FDW.
\&        \- Export SYNONYMS as views.
\&        \- Export DIRECTORY as external table or directory for external_file extension.
\&        \- Dispatch a list of SQL orders over multiple PostgreSQL connections.
\&        \- Perform a diff between Oracle and PostgreSQL databases for test purposes.
\&        \- MySQL/MariaDB and Microsoft SQL Server migration.
.Ve
.PP
Ora2Pg does its best to automatically convert your Oracle database to PostgreSQL
but there is still manual work to do. The Oracle specific \s-1PL/SQL\s0 code generated
for functions, procedures, packages and triggers has to be reviewed to match
the PostgreSQL syntax. You will find some useful recommendations on porting
Oracle \s-1PL/SQL\s0 code to PostgreSQL \s-1PL/PGSQL\s0 at \*(L"Converting from other Databases
to PostgreSQL\*(R", section: Oracle (http://wiki.postgresql.org/wiki/Main_Page).
.PP
See http://ora2pg.darold.net/report.html for an \s-1HTML\s0 sample of an Oracle database
migration report.
.SH "INSTALLATION"
.IX Header "INSTALLATION"
All Perl modules can always be found at \s-1CPAN\s0 (http://search.cpan.org/). Just
type the full name of the module (ex: DBD::Oracle) into the search input box
and it will bring you to the download page.
.PP
Releases of Ora2Pg are hosted at \s-1SF\s0.net (https://sourceforge.net/projects/ora2pg/).
.PP
Under Windows you should install Strawberry Perl (http://strawberryperl.com/)
and the corresponding Oracle client for your OS. Since version 5.32 this Perl
distribution includes pre-compiled drivers for DBD::Oracle and DBD::Pg.
.SS "Requirement"
.IX Subsection "Requirement"
The Oracle Instant Client or a full Oracle installation must be installed on
the system. You can download the \s-1RPM\s0 from Oracle download center:
.PP
.Vb 4
\&    rpm \-ivh oracle\-instantclient12.2\-basic\-12.2.0.1.0\-1.x86_64.rpm
\&    rpm \-ivh oracle\-instantclient12.2\-devel\-12.2.0.1.0\-1.x86_64.rpm
\&    rpm \-ivh oracle\-instantclient12.2\-jdbc\-12.2.0.1.0\-1.x86_64.rpm
\&    rpm \-ivh oracle\-instantclient12.2\-sqlplus\-12.2.0.1.0\-1.x86_64.rpm
.Ve
.PP
or simply download the corresponding \s-1ZIP\s0 archives from Oracle download center
and install them where you want, for example: /opt/oracle/instantclient_12_2/
.PP
You also need a modern Perl distribution (perl 5.10 or later). To connect to a
database and proceed with its migration you need the \s-1DBI\s0 Perl module > 1.614.
To migrate an Oracle database you need the DBD::Oracle Perl module to be
installed.
.PP
To install DBD::Oracle and have it working you need to have the Oracle client
libraries installed and the \s-1ORACLE_HOME\s0 environment variable must be defined.
.PP
If you plan to export a MySQL database you need to install the Perl module
DBD::MySQL which requires that the mysql client libraries are installed.
.PP
If you plan to export a \s-1SQL\s0 Server database you need to install the Perl module
\&\s-1DBD::ODBC\s0 which requires that the unixODBC package is installed.
.PP
On some Perl distributions you may need to install the Time::HiRes Perl module.
.PP
If your distribution doesn't include these Perl modules you can install them
using \s-1CPAN:\s0
.PP
.Vb 4
\&        perl \-MCPAN \-e \*(Aqinstall DBD::Oracle\*(Aq
\&        perl \-MCPAN \-e \*(Aqinstall DBD::MySQL\*(Aq
\&        perl \-MCPAN \-e \*(Aqinstall DBD::ODBC\*(Aq
\&        perl \-MCPAN \-e \*(Aqinstall Time::HiRes\*(Aq
.Ve
.PP
otherwise use the packages provided by your distribution.
.SS "Optional"
.IX Subsection "Optional"
By default Ora2Pg dumps exports to flat files; to load them into your PostgreSQL
database you need the PostgreSQL client (psql). If you don't have it on the
host running Ora2Pg you can always transfer these files to a host with the psql
client installed. If you prefer to load exports 'on the fly', the Perl module
DBD::Pg is required.
.PP
Ora2Pg allows you to dump all output in a compressed gzip file; to do that you
need the Compress::Zlib Perl module, or if you prefer bzip2 compression,
the program bzip2 must be available in your \s-1PATH.\s0
.PP
If your distribution doesn't include these Perl modules you can install them
using \s-1CPAN:\s0
.PP
.Vb 2
\&        perl \-MCPAN \-e \*(Aqinstall DBD::Pg\*(Aq
\&        perl \-MCPAN \-e \*(Aqinstall Compress::Zlib\*(Aq
.Ve
.PP
otherwise use the packages provided by your distribution.
.SS "Instructions for \s-1SQL\s0 Server"
.IX Subsection "Instructions for SQL Server"
For \s-1SQL\s0 Server you need to install the unixodbc package and the Perl
\&\s-1DBD::ODBC\s0 driver:
.PP
.Vb 2
\&        sudo apt install unixodbc
\&        sudo apt install libdbd\-odbc\-perl
.Ve
.PP
or
.PP
.Vb 3
\&        sudo yum install unixodbc
\&        sudo yum install perl\-DBD\-ODBC
\&        sudo yum install perl\-DBD\-Pg
.Ve
.PP
then install the Microsoft \s-1ODBC\s0 Driver for \s-1SQL\s0 Server. Follow the instructions
for your operating system from here:
.PP
.Vb 1
\&        https://docs.microsoft.com/fr\-fr/sql/connect/odbc/linux\-mac/installing\-the\-microsoft\-odbc\-driver\-for\-sql\-server?view=sql\-server\-ver16
.Ve
.PP
Once that is done, set the following in the /etc/odbcinst.ini file, adjusting
the \s-1SQL\s0 Server \s-1ODBC\s0 driver version:
.PP
.Vb 4
\&        [msodbcsql18]
\&        Description=Microsoft ODBC Driver 18 for SQL Server
\&        Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql\-18.0.so.1.1
\&        UsageCount=1
.Ve
.PP
See \s-1ORACLE_DSN\s0 to know how to use the driver to connect to your \s-1MSSQL\s0 database.
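.PP
For reference, the \s-1ORACLE_DSN\s0 line in ora2pg.conf for a \s-1SQL\s0 Server source could
look like the following sketch, matching the driver name configured above (the
server and database names are placeholders):
.PP
```
ORACLE_DSN	dbi:ODBC:driver=msodbcsql18;server=mydb.database.windows.net;database=testdb;TrustServerCertificate=yes
```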
.SS "Installing Ora2Pg"
.IX Subsection "Installing Ora2Pg"
Like any other Perl module, Ora2Pg can be installed with the following commands:
.PP
.Vb 4
\&        tar xjf ora2pg\-x.x.tar.bz2
\&        cd ora2pg\-x.x/
\&        perl Makefile.PL
\&        make && make install
.Ve
.PP
This will install Ora2Pg.pm into your site Perl repository, ora2pg into
/usr/local/bin/ and ora2pg.conf into /etc/ora2pg/.
.PP
On Windows(tm) OSes you may use instead:
.PP
.Vb 2
\&        perl Makefile.PL
\&        gmake && gmake install
.Ve
.PP
This will install the scripts and libraries into your Perl site installation
directory, and the ora2pg.conf file as well as all documentation files
into C:\eora2pg\e
.PP
To install ora2pg in a different directory than the default one, simply
use this command:
.PP
.Vb 2
\&        perl Makefile.PL PREFIX=<your_install_dir>
\&        make && make install
.Ve
.PP
then set \s-1PERL5LIB\s0 to the path to your installation directory before using
Ora2Pg.
.PP
.Vb 2
\&        export PERL5LIB=<your_install_dir>
\&        ora2pg \-c config/ora2pg.conf \-t TABLE \-b outdir/
.Ve
.SS "Packaging"
.IX Subsection "Packaging"
If you want to build the binary package for your preferred Linux distribution,
take a look at the packaging/ directory of the source tarball. It contains
everything needed to build \s-1RPM,\s0 Slackware and Debian packages. See the
\&\s-1README\s0 file in that directory.
.SS "Installing DBD::Oracle"
.IX Subsection "Installing DBD::Oracle"
Ora2Pg needs the Perl module DBD::Oracle for connectivity to an Oracle database
from Perl \s-1DBI.\s0 You can get DBD::Oracle from \s-1CPAN,\s0 the Perl module repository.
.PP
After setting the \s-1ORACLE_HOME\s0 and \s-1LD_LIBRARY_PATH\s0 environment variables as the
root user, install DBD::Oracle. Proceed as follows:
.PP
.Vb 3
\&        export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
\&        export ORACLE_HOME=/usr/lib/oracle/12.2/client64
\&        perl \-MCPAN \-e \*(Aqinstall DBD::Oracle\*(Aq
.Ve
.PP
If you are running \s-1CPAN\s0 for the first time it will ask many questions; you can
keep the defaults by pressing the \s-1ENTER\s0 key, but you need to choose an
appropriate mirror site for \s-1CPAN\s0 to download the modules. Install through
\&\s-1CPAN\s0 manually if the above doesn't work:
.PP
.Vb 9
\&        #perl \-MCPAN \-e shell
\&        cpan> get DBD::Oracle
\&        cpan> quit
\&        cd ~/.cpan/build/DBD\-Oracle*
\&        export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
\&        export ORACLE_HOME=/usr/lib/oracle/11.2/client64
\&        perl Makefile.PL
\&        make
\&        make install
.Ve
.PP
Installing DBD::Oracle requires that the three Oracle packages instant-client,
\&\s-1SDK\s0 and SQLplus are installed, as well as the libaio1 library.
.PP
If you are using the Instant Client from the \s-1ZIP\s0 archives, \s-1LD_LIBRARY_PATH\s0 and
\&\s-1ORACLE_HOME\s0 will be the same and must be set to the directory where you have
installed the files. For example: /opt/oracle/instantclient_12_2/
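.PP
For example, with the Instant Client unpacked from the \s-1ZIP\s0 archives the two
variables would be set like this (the path is the example directory above):
.PP
```shell
# Both variables point at the same unpack directory for a ZIP-based
# Instant Client install (the path is an example).
export ORACLE_HOME=/opt/oracle/instantclient_12_2
export LD_LIBRARY_PATH="$ORACLE_HOME"
echo "ORACLE_HOME=$ORACLE_HOME"
```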
.SH "CONFIGURATION"
.IX Header "CONFIGURATION"
Ora2Pg configuration can be as simple as choosing the Oracle database to export
and choosing the export type. This can be done in a minute.
.PP
By reading this documentation you will also be able to:
.PP
.Vb 6
\&        \- Select only certain tables and/or column for export.
\&        \- Rename some tables and/or column during export.
\&        \- Select data to export following a WHERE clause per table.
\&        \- Delay database constraints during data loading.
\&        \- Compress exported data to save disk space.
\&        \- and much more.
.Ve
.PP
Full control of the Oracle database migration is taken through a single
configuration file named ora2pg.conf. The format of this file consists of a
directive name in upper case followed by a tab character and a value.
Comments are lines beginning with a #.
.PP
There's no specific order for the configuration directives; they take effect
at the point where they are read in the configuration file.
.PP
For configuration directives that take a single value, you can use them
multiple times in the configuration file but only the last occurrence found
in the file will be used. For configuration directives that allow a list
of values, you can use them multiple times; the values will be appended to the
list. If you use the \s-1IMPORT\s0 directive to load a custom configuration file,
directives defined in this file will be stored from the place where the \s-1IMPORT\s0
directive is found, so it is better to put it at the end of the configuration
file.
.PP
Values set in command line options will override values from the configuration
file.
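.PP
The tab-separated layout and the 'last occurrence wins' rule for single-value
directives can be sketched with a few shell commands (the directive name is
real, the values are arbitrary):
.PP
```shell
# Build a throwaway config fragment: a comment line, then the same
# single-value directive twice with different values.
conf=$(mktemp)
printf '# sample ora2pg.conf fragment\n' > "$conf"
printf 'DATA_LIMIT\t10000\n' >> "$conf"
printf 'DATA_LIMIT\t5000\n' >> "$conf"

# Keep only the last value seen for the directive, as ora2pg does.
data_limit=$(awk -F'\t' '$1 == "DATA_LIMIT" { v = $2 } END { print v }' "$conf")
echo "effective DATA_LIMIT: $data_limit"
rm -f "$conf"
```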
.SS "Ora2Pg usage"
.IX Subsection "Ora2Pg usage"
First of all, be sure that your library and binary paths include the Oracle
Instant Client installation:
.PP
.Vb 2
\&        export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
\&        export PATH="/usr/lib/oracle/11.2/client64/bin:$PATH"
.Ve
.PP
By default Ora2Pg will look for the /etc/ora2pg/ora2pg.conf configuration file;
if the file exists you can simply execute:
.PP
.Vb 1
\&        /usr/local/bin/ora2pg
.Ve
.PP
or under Windows(tm) run the ora2pg.bat file, located in your Perl bin directory.
Windows(tm) users may also find a template configuration file in C:\eora2pg
.PP
If you want to use another configuration file, just give the path as a command
line argument:
.PP
.Vb 1
\&        /usr/local/bin/ora2pg \-c /etc/ora2pg/new_ora2pg.conf
.Ve
.PP
Here are all command line parameters available when using ora2pg:
.PP
Usage: ora2pg [\-dhpqv \-\-estimate_cost \-\-dump_as_html] [\-\-option value]
.PP
.Vb 10
\&    \-a | \-\-allow str  : Comma separated list of objects to allow from export.
\&                        Can be used with SHOW_COLUMN too.
\&    \-b | \-\-basedir dir: Set the default output directory, where files
\&                        resulting from exports will be stored.
\&    \-c | \-\-conf file  : Set an alternate configuration file other than the
\&                        default /etc/ora2pg/ora2pg.conf.
\&    \-C | \-\-cdc_file file: File used to store/read SCN per table during export.
\&                        default: TABLES_SCN.log in the current directory. This
\&                        is the file written by the \-\-cdc_ready option.
\&    \-d | \-\-debug      : Enable verbose output.
\&    \-D | \-\-data_type str : Allow custom type replacement at command line.
\&    \-e | \-\-exclude str: Comma separated list of objects to exclude from export.
\&                        Can be used with SHOW_COLUMN too.
\&    \-h | \-\-help       : Print this short help.
\&    \-g | \-\-grant_object type : Extract privilege from the given object type.
\&                        See possible values with GRANT_OBJECT configuration.
\&    \-i | \-\-input file : File containing Oracle PL/SQL code to convert with
\&                        no Oracle database connection initiated.
\&    \-j | \-\-jobs num   : Number of parallel process to send data to PostgreSQL.
\&    \-J | \-\-copies num : Number of parallel connections to extract data from Oracle.
\&    \-l | \-\-log file   : Set a log file. Default is stdout.
\&    \-L | \-\-limit num  : Number of tuples extracted from Oracle and stored in
\&                        memory before writing, default: 10000.
\&    \-m | \-\-mysql      : Export a MySQL database instead of an Oracle schema.
\&    \-M | \-\-mssql      : Export a Microsoft SQL Server database.
\&    \-n | \-\-namespace schema : Set the Oracle schema to extract from.
\&    \-N | \-\-pg_schema schema : Set PostgreSQL\*(Aqs search_path.
\&    \-o | \-\-out file   : Set the path to the output file where SQL will
\&                        be written. Default: output.sql in running directory.
\&    \-p | \-\-plsql      : Enable PLSQL to PLPGSQL code conversion.
\&    \-P | \-\-parallel num: Number of parallel tables to extract at the same time.
\&    \-q | \-\-quiet      : Disable progress bar.
\&    \-r | \-\-relative   : use \eir instead of \ei in the psql scripts generated.
\&    \-s | \-\-source DSN : Allow to set the Oracle DBI datasource.
\&    \-S | \-\-scn    SCN : Allow to set the Oracle System Change Number (SCN) to
\&                        use to export data. It will be used in the WHERE clause
\&                        to get the data. It is used with action COPY or INSERT.
\&    \-t | \-\-type export: Set the export type. It will override the one
\&                        given in the configuration file (TYPE).
\&    \-T | \-\-temp_dir dir: Set a distinct temporary directory when two
\&                        or more ora2pg are run in parallel.
\&    \-u | \-\-user name  : Set the Oracle database connection user.
\&                        ORA2PG_USER environment variable can be used instead.
\&    \-v | \-\-version    : Show Ora2Pg Version and exit.
\&    \-w | \-\-password pwd : Set the password of the Oracle database user.
\&                        ORA2PG_PASSWD environment variable can be used instead.
\&    \-W | \-\-where clause : Set the WHERE clause to apply to the Oracle query to
\&                        retrieve data. Can be used multiple time.
\&    \-\-forceowner      : Force ora2pg to set tables and sequences owner like in
\&                  Oracle database. If the value is set to a username this one
\&                  will be used as the objects owner. By default it\*(Aqs the user
\&                  used to connect to the Pg database that will be the owner.
\&    \-\-nls_lang code: Set the Oracle NLS_LANG client encoding.
\&    \-\-client_encoding code: Set the PostgreSQL client encoding.
\&    \-\-view_as_table str: Comma separated list of views to export as table.
\&    \-\-estimate_cost   : Activate the migration cost evaluation with SHOW_REPORT
\&    \-\-cost_unit_value minutes: Number of minutes for a cost evaluation unit.
\&                  default: 5 minutes, corresponds to a migration conducted by a
\&                  PostgreSQL expert. Set it to 10 if this is your first migration.
\&   \-\-dump_as_html     : Force ora2pg to dump report in HTML, used only with
\&                        SHOW_REPORT. Default is to dump report as simple text.
\&   \-\-dump_as_csv      : As above but force ora2pg to dump report in CSV.
\&   \-\-dump_as_json     : As above but force ora2pg to dump report in JSON.
\&   \-\-dump_as_sheet    : Report migration assessment with one CSV line per database.
\&   \-\-init_project name: Initialise a typical ora2pg project tree. Top directory
\&                        will be created under project base dir.
\&   \-\-project_base dir : Define the base dir for ora2pg project trees. Default
\&                        is current directory.
\&   \-\-print_header     : Used with \-\-dump_as_sheet to print the CSV header
\&                        especially for the first run of ora2pg.
\&   \-\-human_days_limit num : Set the number of human\-days limit where the migration
\&                        assessment level switch from B to C. Default is set to
\&                        5 human\-days.
\&   \-\-audit_user list  : Comma separated list of usernames to filter queries in
\&                        the DBA_AUDIT_TRAIL table. Used only with SHOW_REPORT
\&                        and QUERY export type.
\&   \-\-pg_dsn DSN       : Set the datasource to PostgreSQL for direct import.
\&   \-\-pg_user name     : Set the PostgreSQL user to use.
\&   \-\-pg_pwd password  : Set the PostgreSQL password to use.
\&   \-\-count_rows       : Force ora2pg to perform a real row count in TEST,
\&                        TEST_COUNT and SHOW_TABLE actions.
\&   \-\-no_header        : Do not append Ora2Pg header to output file
\&   \-\-oracle_speed     : Use to know at which speed Oracle is able to send
\&                        data. No data will be processed or written.
\&   \-\-ora2pg_speed     : Use to know at which speed Ora2Pg is able to send
\&                        transformed data. Nothing will be written.
\&   \-\-blob_to_lo       : export BLOB as large objects, can only be used with
\&                        action SHOW_COLUMN, TABLE and INSERT.
\&   \-\-cdc_ready        : use current SCN per table to export data and register
\&                        them into a file named TABLES_SCN.log per default. It
\&                        can be changed using \-C | \-\-cdc_file.
\&   \-\-lo_import        : use psql \elo_import command to import BLOB as large
\&                        object. Can be used to import data with COPY and import
\&                        large objects manually in a second pass. It is required
\&                        for BLOB > 1GB. See documentation for more explanation.
\&   \-\-mview_as_table str: Comma separated list of materialized views to export
\&                        as regular table.
\&   \-\-drop_if_exists   : Drop the object before creation if it exists.
\&   \-\-delete clause    : Set the DELETE clause to apply to the Oracle query to
\&                        be applied before importing data. Can be used multiple
\&                        time.
.Ve
.PP
See full documentation at https://ora2pg.darold.net/ for more help or see
manpage with 'man ora2pg'.
.PP
ora2pg will return 0 on success, 1 on error. It will return 2 when a child
process has been interrupted and you've gotten the warning message:
    \*(L"\s-1WARNING:\s0 an error occurs during data export. Please check what's happen.\*(R"
Most of the time this is an \s-1OOM\s0 issue; first try reducing the \s-1DATA_LIMIT\s0 value.
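.PP
A wrapper script can react to these exit codes. The sketch below uses a
hypothetical run_ora2pg stand-in (here hard-coded to return 2, simulating an
interrupted child process) so the handling of each documented code is visible;
replace it with the real ora2pg invocation:
.PP
```shell
# run_ora2pg is a stand-in for the real ora2pg invocation; it simulates
# exit code 2 (a child process was interrupted) for demonstration.
run_ora2pg() { return 2; }

status=0
run_ora2pg || status=$?
case $status in
  0) msg="export finished successfully" ;;
  1) msg="export failed, check the log file" ;;
  2) msg="child interrupted, try reducing DATA_LIMIT" ;;
  *) msg="unexpected exit code $status" ;;
esac
echo "$msg"
```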
.PP
For developers, it is possible to add your own custom options in the Perl
script ora2pg, as any configuration directive from ora2pg.conf can be passed
in lower case to the new Ora2Pg object instance. See the ora2pg code on how
to add your own option.
.PP
Note that performance might be improved by updating statistics on Oracle:
.PP
.Vb 6
\&        BEGIN
\&          DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);
\&          DBMS_STATS.GATHER_DATABASE_STATS;
\&          DBMS_STATS.GATHER_DICTIONARY_STATS;
\&        END;
\&        /
.Ve
.SS "Generate a migration template"
.IX Subsection "Generate a migration template"
The two options \-\-project_base and \-\-init_project, when used, tell ora2pg
to create a project template with a work tree, a configuration file and a
script to export all objects from the Oracle database. Here is a sample
of the command usage:
.PP
.Vb 10
\&        ora2pg \-\-project_base /app/migration/ \-\-init_project test_project
\&        Creating project test_project.
\&        /app/migration/test_project/
\&                schema/
\&                        dblinks/
\&                        directories/
\&                        functions/
\&                        grants/
\&                        mviews/
\&                        packages/
\&                        partitions/
\&                        procedures/
\&                        sequences/
\&                        synonyms/
\&                        tables/
\&                        tablespaces/
\&                        triggers/
\&                        types/
\&                        views/
\&                sources/
\&                        functions/
\&                        mviews/
\&                        packages/
\&                        partitions/
\&                        procedures/
\&                        triggers/
\&                        types/
\&                        views/
\&                data/
\&                config/
\&                reports/
\&
\&        Generating generic configuration file
\&        Creating script export_schema.sh to automate all exports.
\&        Creating script import_all.sh to automate all imports.
.Ve
.PP
It creates a generic config file where you just have to define the Oracle
database connection, and a shell script called export_schema.sh. The sources/
directory will contain the Oracle code, the schema/ directory will contain the
code ported to PostgreSQL. The reports/ directory will contain the \s-1HTML\s0 reports
with the migration cost assessment.
.PP
If you want to use your own default config file, use the \-c option to give
the path to that file. Rename it with a .dist suffix if you want ora2pg to
apply the generic configuration values; otherwise the configuration file
will be copied untouched.
.PP
Once you have set up the connection to the Oracle database you can execute the
script export_schema.sh, which will export all object types from your Oracle
database and output \s-1DDL\s0 files into the schema/ subdirectories. At the end of
the export it will give you the command to export the data later, once the
import of the schema has been done and verified.
.PP
You can choose to load the generated \s-1DDL\s0 files manually or use the second
script, import_all.sh, to import those files interactively. If this kind of
migration is not routine for you, it is recommended to use those scripts.
.SS "Oracle database connection"
.IX Subsection "Oracle database connection"
There are five configuration directives to control access to the Oracle database.
.IP "\s-1ORACLE_HOME\s0" 4
.IX Item "ORACLE_HOME"
Used to set \s-1ORACLE_HOME\s0 environment variable to the Oracle libraries required
by the DBD::Oracle Perl module.
.IP "\s-1ORACLE_DSN\s0" 4
.IX Item "ORACLE_DSN"
This directive is used to set the data source name in the form of a standard
\&\s-1DBI DSN.\s0 For example:
.Sp
.Vb 1
\&        dbi:Oracle:host=oradb_host.myhost.com;sid=DB_SID;port=1521
.Ve
.Sp
or
.Sp
.Vb 1
\&        dbi:Oracle:DB_SID
.Ve
.Sp
On 18c this could be for example:
.Sp
.Vb 1
\&        dbi:Oracle:host=192.168.1.29;service_name=pdb1;port=1521
.Ve
.Sp
For the second notation the \s-1SID\s0 should be declared in the well-known file
\&\f(CW$ORACLE_HOME\fR/network/admin/tnsnames.ora or in the path given to the \s-1TNS_ADMIN\s0
environment variable.
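.Sp
As a sketch, a tnsnames.ora entry matching the dbi:Oracle:DB_SID notation could
look like the following (the host and port reuse the example values above):
.Sp
```
DB_SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb_host.myhost.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB_SID))
  )
```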
.Sp
For MySQL the \s-1DSN\s0 will look like this:
.Sp
.Vb 1
\&        dbi:mysql:host=192.168.1.10;database=sakila;port=3306
.Ve
.Sp
the 'sid' part is replaced by 'database'.
.Sp
For \s-1MS SQL\s0 Server it will look like this:
.Sp
.Vb 1
\&        dbi:ODBC:driver=msodbcsql18;server=mydb.database.windows.net;database=testdb;TrustServerCertificate=yes
.Ve
.IP "\s-1ORACLE_USER\s0 and \s-1ORACLE_PWD\s0" 4
.IX Item "ORACLE_USER and ORACLE_PWD"
These two directives are used to define the user and password for the Oracle
database connection. Note that if you can, it is better to log in as an Oracle
super admin to avoid grant problems during the database scan and to be sure
that nothing is missing.
.Sp
If you do not supply a credential with \s-1ORACLE_PWD\s0 and you have installed the
Term::ReadKey Perl module, Ora2Pg will ask for the password interactively. If
\&\s-1ORACLE_USER\s0 is not set it will be asked interactively too.
.Sp
To connect to a local \s-1ORACLE\s0 instance with connections \*(L"as sysdba\*(R" you have to
set \s-1ORACLE_USER\s0 to \*(L"/\*(R" and an empty password.
.Sp
To make a connection using an Oracle Secure External Password Store (\s-1SEPS\s0), 
first configure the Oracle Wallet and then set both the \s-1ORACLE_USER\s0 and 
\&\s-1ORACLE_PWD\s0 directives to the special value of \*(L"_\|_SEPS_\|_\*(R" (without the quotes 
but with the double underscore).
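.Sp
In ora2pg.conf this looks like the following (tab-separated, as for any
directive):
.Sp
```
ORACLE_USER	__SEPS__
ORACLE_PWD	__SEPS__
```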
.IP "\s-1USER_GRANTS\s0" 4
.IX Item "USER_GRANTS"
Set this directive to 1 if you connect to the Oracle database as a simple user
and do not have enough grants to extract things from the \s-1DBA_...\s0 tables. It
will use the \s-1ALL_...\s0 tables instead.
.Sp
Warning: if you use export type \s-1GRANT,\s0 you must set this configuration option
to 0 or it will not work.
.IP "\s-1TRANSACTION\s0" 4
.IX Item "TRANSACTION"
This directive may be used if you want to change the default isolation level of
the data export transaction. Default is now to set the level to a serializable
transaction to ensure data consistency. The allowed values for this directive
are:
.Sp
.Vb 4
\&        readonly: \*(AqSET TRANSACTION READ ONLY\*(Aq
\&        readwrite: \*(AqSET TRANSACTION READ WRITE\*(Aq
\&        serializable: \*(AqSET TRANSACTION ISOLATION LEVEL SERIALIZABLE\*(Aq
\&        committed: \*(AqSET TRANSACTION ISOLATION LEVEL READ COMMITTED\*(Aq
.Ve
.Sp
Releases before 6.2 used to set the isolation level to a \s-1READ ONLY\s0
transaction, but in some cases this was breaking data consistency, so the
default is now \s-1SERIALIZABLE.\s0
.IP "\s-1INPUT_FILE\s0" 4
.IX Item "INPUT_FILE"
This directive does not control the Oracle database connection; rather, it
disables the use of any Oracle database by accepting a file as input.
Set this directive to a file containing Oracle \s-1PL/SQL\s0 code, like a function,
procedure or full package body, to prevent Ora2Pg from connecting to an
Oracle database and instead apply its conversion tool to the content of the
file. This can be used with most export types: \s-1TABLE, TRIGGER, PROCEDURE,
VIEW, FUNCTION\s0 or \s-1PACKAGE,\s0 etc.
.IP "\s-1ORA_INITIAL_COMMAND\s0" 4
.IX Item "ORA_INITIAL_COMMAND"
This directive can be used to send an initial command to Oracle, just after
the connection. For example to unlock a policy before reading objects or
to set some session parameters. This directive can be used multiple times.
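.Sp
For example, the following session settings (shown as a sketch; the schema
name is hypothetical) would be run just after the connection:
.Sp
.Vb 2
\&        ORA_INITIAL_COMMAND     ALTER SESSION SET NLS_DATE_FORMAT=\*(AqYYYY\-MM\-DD HH24:MI:SS\*(Aq
\&        ORA_INITIAL_COMMAND     ALTER SESSION SET CURRENT_SCHEMA=APPS
.Ve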
.SS "Data encryption with Oracle server"
.IX Subsection "Data encryption with Oracle server"
If your Oracle Client config file already includes the encryption method,
then DBD:Oracle uses those settings to encrypt the connection while you
extract the data. For example if you have configured the Oracle Client
config file (sqlnet.ora or .sqlnet) with the following information:
.PP
.Vb 4
\&        # Configure encryption of connections to Oracle
\&        SQLNET.ENCRYPTION_CLIENT = required
\&        SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256, RC4_256)
\&        SQLNET.CRYPTO_SEED = \*(Aqshould be 10\-70 random characters\*(Aq
.Ve
.PP
Any tool that uses the Oracle client to talk to the database will be
encrypted if you setup session encryption like above.
.PP
For example, Perl's \s-1DBI\s0 uses DBD-Oracle, which uses the Oracle client
for actually handling database communication. If the installation of
Oracle client used by Perl is setup to request encrypted connections,
then your Perl connection to an Oracle database will also be encrypted.
.PP
Full details at https://kb.berkeley.edu/jivekb/entry.jspa?externalID=1005
.SS "Testing connection"
.IX Subsection "Testing connection"
Once you have set the Oracle database \s-1DSN\s0 you can execute ora2pg to see if
it works:
.PP
.Vb 1
\&        ora2pg \-t SHOW_VERSION \-c config/ora2pg.conf
.Ve
.PP
will show the Oracle database server version. Take some time here to test your
installation, as most problems occur at this step; the other configuration
steps are more technical.
.SS "Troubleshooting"
.IX Subsection "Troubleshooting"
If the output.sql file contains nothing other than the Pg transaction
header and footer, there are two possible reasons. The perl script ora2pg
dumps an ORA-XXX error: your \s-1DSN\s0 or login information is wrong; check the
error and your settings and try again. Or the perl script says nothing and the
output file is empty: the user lacks permission to extract anything from
the database. Try to connect to Oracle as a super user or take a look at the
directive \s-1USER_GRANTS\s0 above and at the next section, especially the \s-1SCHEMA\s0 directive.
.IP "\s-1LOGFILE\s0" 4
.IX Item "LOGFILE"
By default all messages are sent to the standard output. If you give a file
path to that directive, all output will be appended to this file.
.SS "Oracle schema to export"
.IX Subsection "Oracle schema to export"
The Oracle database export can be limited to a specific schema or namespace;
this can be mandatory depending on the database connection user.
.IP "\s-1SCHEMA\s0" 4
.IX Item "SCHEMA"
This directive is used to set the schema name to use during export.
For example:
.Sp
.Vb 1
\&        SCHEMA  APPS
.Ve
.Sp
will extract objects associated with the \s-1APPS\s0 schema.
.Sp
When no schema name is provided and \s-1EXPORT_SCHEMA\s0 is enabled, Ora2Pg
will export all objects from all schemas of the Oracle instance with
their names prefixed with the schema name.
.IP "\s-1EXPORT_SCHEMA\s0" 4
.IX Item "EXPORT_SCHEMA"
By default the Oracle schema is not exported into the PostgreSQL database and
all objects are created under the default Pg namespace. If you want to also
export this schema and create all objects under this namespace, set the
\&\s-1EXPORT_SCHEMA\s0 directive to 1. This will set the schema search_path at the
top of the export \s-1SQL\s0 file to the schema name set in the \s-1SCHEMA\s0 directive,
together with the default pg_catalog schema. If you want to change this path,
use the \s-1PG_SCHEMA\s0 directive.
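.Sp
For example, with \s-1EXPORT_SCHEMA\s0 set to 1 and \s-1SCHEMA\s0 set to \s-1APPS,\s0 the
export \s-1SQL\s0 file should start with a line similar to:
.Sp
.Vb 1
\&        SET search_path = apps,pg_catalog;
.Ve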
.IP "\s-1CREATE_SCHEMA\s0" 4
.IX Item "CREATE_SCHEMA"
Enable/disable the \s-1CREATE SCHEMA SQL\s0 statement at the beginning of the
output file. It is enabled by default and only concerns the \s-1TABLE\s0 export type.
.IP "\s-1COMPILE_SCHEMA\s0" 4
.IX Item "COMPILE_SCHEMA"
By default Ora2Pg will only export valid \s-1PL/SQL\s0 code. You can force Oracle to
recompile the invalidated code to give it a chance to obtain the valid status
and then be exportable.
.Sp
Enable this directive to force Oracle to compile schema before exporting code.
When this directive is enabled and \s-1SCHEMA\s0 is set to a specific schema name,
only invalid objects in this schema will be recompiled. If \s-1SCHEMA\s0 is not set
then all schemas will be recompiled. To force recompilation of invalid objects
in a specific schema, set \s-1COMPILE_SCHEMA\s0 to the schema name you want to recompile.
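.Sp
For example, as a sketch (\s-1HR\s0 being a hypothetical schema name):
.Sp
.Vb 4
\&        # Recompile all invalid objects before export
\&        COMPILE_SCHEMA  1
\&        # Or recompile only invalid objects of a given schema
\&        COMPILE_SCHEMA  HR
.Ve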
.Sp
This asks Oracle to validate the \s-1PL/SQL\s0 code that could have been invalidated
after an export/import, for example. The '\s-1VALID\s0' or '\s-1INVALID\s0' status applies to
functions, procedures, packages and user defined types. It also concerns disabled
triggers.
.IP "\s-1EXPORT_INVALID\s0" 4
.IX Item "EXPORT_INVALID"
If the above configuration directive is not enough to validate your \s-1PL/SQL\s0 code
enable this configuration directive to allow export of all \s-1PL/SQL\s0 code even if
it is marked as invalid. The '\s-1VALID\s0' or '\s-1INVALID\s0' status applies to functions,
procedures, packages and user defined types.
.IP "\s-1PG_SCHEMA\s0" 4
.IX Item "PG_SCHEMA"
Allows you to define/force the PostgreSQL schema to use. By default, if you set
\&\s-1EXPORT_SCHEMA\s0 to 1, the PostgreSQL search_path will be set to the schema
name given as the value of the \s-1SCHEMA\s0 directive.
.Sp
The value can be a comma-delimited list of schema names, but not when using the
\&\s-1TABLE\s0 export type, because in this case it will generate the \s-1CREATE SCHEMA\s0
statement, which doesn't support multiple schema names. For example, if you set
\&\s-1PG_SCHEMA\s0 to something like \*(L"user_schema, public\*(R", the search path will be
set like this:
.Sp
.Vb 1
\&        SET search_path = user_schema, public;
.Ve
.Sp
forcing the use of another schema (here user_schema) than the Oracle schema
set in the \s-1SCHEMA\s0 directive.
.Sp
You can also set the default search_path for the PostgreSQL user you are using
to connect to the destination database by using:
.Sp
.Vb 1
\&        ALTER ROLE username SET search_path TO user_schema, public;
.Ve
.Sp
in this case you don't have to set \s-1PG_SCHEMA.\s0
.IP "\s-1SYSUSERS\s0" 4
.IX Item "SYSUSERS"
Without an explicit schema, Ora2Pg will export all objects that do not belong
to a system schema or role:
.Sp
.Vb 12
\&        SYSTEM,CTXSYS,DBSNMP,EXFSYS,LBACSYS,MDSYS,MGMT_VIEW,
\&        OLAPSYS,ORDDATA,OWBSYS,ORDPLUGINS,ORDSYS,OUTLN,
\&        SI_INFORMTN_SCHEMA,SYS,SYSMAN,WK_TEST,WKSYS,WKPROXY,
\&        WMSYS,XDB,APEX_PUBLIC_USER,DIP,FLOWS_020100,FLOWS_030000,
\&        FLOWS_040100,FLOWS_010600,FLOWS_FILES,MDDATA,ORACLE_OCM,
\&        SPATIAL_CSW_ADMIN_USR,SPATIAL_WFS_ADMIN_USR,XS$NULL,PERFSTAT,
\&        SQLTXPLAIN,DMSYS,TSMSYS,WKSYS,APEX_040000,APEX_040200,
\&        DVSYS,OJVMSYS,GSMADMIN_INTERNAL,APPQOSSYS,DVSYS,DVF,
\&        AUDSYS,APEX_030200,MGMT_VIEW,ODM,ODM_MTR,TRACESRV,MTMSYS,
\&        OWBSYS_AUDIT,WEBSYS,WK_PROXY,OSE$HTTP$ADMIN,
\&        AURORA$JIS$UTILITY$,AURORA$ORB$UNAUTHENTICATED,
\&        DBMS_PRIVILEGE_CAPTURE,CSMIG,MGDSYS,SDE,DBSFWUSER
.Ve
.Sp
Depending on your Oracle installation, you may have several other system roles
defined. To append these users to the schema exclusion list, just set the
\&\s-1SYSUSERS\s0 configuration directive to a comma-separated list of system users
to exclude. For example:
.Sp
.Vb 1
\&        SYSUSERS        INTERNAL,SYSDBA,BI,HR,IX,OE,PM,SH
.Ve
.Sp
will add users \s-1INTERNAL\s0 and \s-1SYSDBA\s0 to the schema exclusion list.
.IP "\s-1FORCE_OWNER\s0" 4
.IX Item "FORCE_OWNER"
By default the owner of the database objects is the one you use to connect
to PostgreSQL using the psql command. If you use another user (postgres for
example) you can force Ora2Pg to set the object owner to be the one used in the
Oracle database by setting the directive to 1, or to a completely different
username by setting the directive value to that username.
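.Sp
For example, as a sketch (appowner being a hypothetical role name):
.Sp
.Vb 4
\&        # Keep the owner defined in the Oracle database
\&        FORCE_OWNER     1
\&        # Or force a specific owner
\&        FORCE_OWNER     appowner
.Ve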
.IP "\s-1FORCE_SECURITY_INVOKER\s0" 4
.IX Item "FORCE_SECURITY_INVOKER"
Ora2Pg uses the function's security privileges set in Oracle, and functions
are often defined with \s-1SECURITY DEFINER.\s0 If you want to override those
security privileges for all functions and use \s-1SECURITY INVOKER\s0 instead,
enable this directive.
.IP "\s-1USE_TABLESPACE\s0" 4
.IX Item "USE_TABLESPACE"
When enabled, this directive forces ora2pg to export all tables, indexes and
constraints using the tablespace names defined in the Oracle database. This
works only with tablespaces that are not \s-1TEMP, USERS\s0 or \s-1SYSTEM.\s0
.IP "\s-1WITH_OID\s0" 4
.IX Item "WITH_OID"
Activating this directive will force Ora2Pg to add \s-1WITH\s0 (\s-1OIDS\s0) when creating
tables or views as tables. Default is same as PostgreSQL, disabled.
.IP "\s-1LOOK_FORWARD_FUNCTION\s0" 4
.IX Item "LOOK_FORWARD_FUNCTION"
List of schemas from which to get meta information about functions/procedures
that are used in the current schema export. When replacing a call to a function
with \s-1OUT\s0 parameters, if the function is declared in another package then the
function call rewriting can not be done, because Ora2Pg only knows about
functions declared in the current schema. By setting a comma-separated list of
schemas as the value of this directive, Ora2Pg will look in these packages for
all function/procedure/package declarations before proceeding to the current
schema export.
.IP "\s-1NO_FUNCTION_METADATA\s0" 4
.IX Item "NO_FUNCTION_METADATA"
Force Ora2Pg to not look for function declarations. Note that this will prevent
Ora2Pg from rewriting function calls if needed. Do not enable it unless
looking for functions breaks other exports.
.SS "Export type"
.IX Subsection "Export type"
The export action is performed following a single configuration directive,
\&'\s-1TYPE\s0'; some others add more control over what should really be exported.
.IP "\s-1TYPE\s0" 4
.IX Item "TYPE"
Here are the different values of the \s-1TYPE\s0 directive, default is \s-1TABLE:\s0
.Sp
.Vb 10
\&        \- TABLE: Extract all tables with indexes, primary keys, unique keys,
\&          foreign keys and check constraints.
\&        \- VIEW: Extract only views.
\&        \- GRANT: Extract roles converted to Pg groups, users and grants on all
\&          objects.
\&        \- SEQUENCE: Extract all sequence and their last position.
\&        \- TABLESPACE: Extract storage spaces for tables and indexes (Pg >= v8).
\&        \- TRIGGER: Extract triggers defined following actions.
\&        \- FUNCTION: Extract functions.
\&        \- PROCEDURE: Extract procedures.
\&        \- PACKAGE: Extract packages and package bodies.
\&        \- INSERT: Extract data as INSERT statement.
\&        \- COPY: Extract data as COPY statement.
\&        \- PARTITION: Extract range and list Oracle partitions with subpartitions.
\&        \- TYPE: Extract user defined Oracle type.
\&        \- FDW: Export Oracle tables as foreign table for Oracle, MySQL and SQL Server FDW.
\&        \- MVIEW: Export materialized view.
\&        \- QUERY: Try to automatically convert Oracle SQL queries.
\&        \- KETTLE: Generate XML ktr template files to be used by Kettle.
\&        \- DBLINK: Generate oracle foreign data wrapper server to use as dblink.
\&        \- SYNONYM: Export Oracle\*(Aqs synonyms as views on other schema\*(Aqs objects.
\&        \- DIRECTORY: Export Oracle\*(Aqs directories as external_file extension objects.
\&        \- LOAD: Dispatch a list of queries over multiple PostgreSQL connections.
\&        \- TEST: perform a diff between Oracle and PostgreSQL database.
\&        \- TEST_COUNT: perform a row count diff between Oracle and PostgreSQL table.
\&        \- TEST_VIEW: perform a count on both side of number of rows returned by views.
\&        \- TEST_DATA: perform data validation check on rows at both sides.
\&        \- SEQUENCE_VALUES: export DDL to set the last values of sequences
.Ve
.Sp
Only one type of export can be performed at a time, so the \s-1TYPE\s0 directive
must be unique. If you have more than one, only the last one found in the file
will be registered.
.Sp
Some export types can not or should not be loaded directly into the PostgreSQL
database and still require a little manual editing. This is the case for \s-1GRANT,
TABLESPACE, TRIGGER, FUNCTION, PROCEDURE, TYPE, QUERY\s0 and \s-1PACKAGE\s0 export
types, especially if you have \s-1PLSQL\s0 code or Oracle specific \s-1SQL\s0 in it.
.Sp
For \s-1TABLESPACE\s0 you must ensure that the file paths exist on the system, and
for \s-1SYNONYM\s0 you should ensure that the objects' owners and schemas correspond
to the new PostgreSQL database design.
.Sp
Note that you can chain multiple exports by giving the \s-1TYPE\s0 directive a
comma-separated list of export types, but in this case you must not use \s-1COPY\s0
or \s-1INSERT\s0 with other export types.
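.Sp
For example, a single run can chain several \s-1DDL\s0 exports:
.Sp
.Vb 1
\&        TYPE    TABLE,SEQUENCE,VIEW,TRIGGER
.Ve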
.Sp
Ora2Pg will convert Oracle partitions using table inheritance, triggers and
functions. See the documentation at the Pg site:
http://www.postgresql.org/docs/current/interactive/ddl\-partitioning.html
.Sp
The \s-1TYPE\s0 export allows the export of user defined Oracle types. If you don't
use the \-\-plsql command line parameter it simply dumps the Oracle user types
as-is, else Ora2Pg will try to convert them to PostgreSQL syntax.
.Sp
The \s-1KETTLE\s0 export type requires that the Oracle and PostgreSQL \s-1DSN\s0 are defined.
.Sp
Since Ora2Pg v8.1 there are some additional informative export types:
.Sp
.Vb 7
\&        SHOW_VERSION : display Oracle version
\&        SHOW_SCHEMA  : display the list of schema available in the database.
\&        SHOW_TABLE   : display the list of tables available.
\&        SHOW_COLUMN  : display the list of tables columns available and the
\&                Ora2PG conversion type from Oracle to PostgreSQL that will be
\&                applied. It will also warn you if there\*(Aqs PostgreSQL reserved
\&                words in Oracle object names.
.Ve
.Sp
Here is an example of the \s-1SHOW_COLUMN\s0 output:
.Sp
.Vb 11
\&        [2] TABLE CURRENT_SCHEMA (1 rows) (Warning: \*(AqCURRENT_SCHEMA\*(Aq is a reserved word in PostgreSQL)
\&                CONSTRAINT : NUMBER(22) => bigint (Warning: \*(AqCONSTRAINT\*(Aq is a reserved word in PostgreSQL)
\&                FREEZE : VARCHAR2(25) => varchar(25) (Warning: \*(AqFREEZE\*(Aq is a reserved word in PostgreSQL)
\&        ...
\&        [6] TABLE LOCATIONS (23 rows)
\&                LOCATION_ID : NUMBER(4) => smallint
\&                STREET_ADDRESS : VARCHAR2(40) => varchar(40)
\&                POSTAL_CODE : VARCHAR2(12) => varchar(12)
\&                CITY : VARCHAR2(30) => varchar(30)
\&                STATE_PROVINCE : VARCHAR2(25) => varchar(25)
\&                COUNTRY_ID : CHAR(2) => char(2)
.Ve
.Sp
These extraction keywords are used to only display the requested information
and exit. This allows you to quickly know what you are going to work on.
.Sp
The \s-1SHOW_COLUMN\s0 type allows another ora2pg command line option, '\-\-allow relname'
or '\-a relname', to limit the displayed information to the given table.
.Sp
The \s-1SHOW_ENCODING\s0 export type will display the \s-1NLS_LANG\s0 and \s-1CLIENT_ENCODING\s0
values that Ora2Pg will use and the real encoding of the Oracle database with
the corresponding client encoding that could be used with PostgreSQL.
.Sp
Ora2Pg allows you to export your Oracle, MySQL or \s-1MSSQL\s0 table definitions to
be used with the oracle_fdw, mysql_fdw or tds_fdw foreign data wrapper. By using
type \s-1FDW\s0 your tables will be exported as follows:
.Sp
.Vb 5
\&        CREATE FOREIGN TABLE oratab (
\&                id        integer           NOT NULL,
\&                text      character varying(30),
\&                floating  double precision  NOT NULL
\&        ) SERVER oradb OPTIONS (table \*(AqORATAB\*(Aq);
.Ve
.Sp
Now you can use the table like a regular PostgreSQL table.
.Sp
Release 10 adds a new export type intended to evaluate the content of the
database to migrate, in terms of objects and cost to complete the migration:
.Sp
.Vb 1
\&        SHOW_REPORT  : show a detailed report of the Oracle database content.
.Ve
.Sp
Here is a sample report: http://ora2pg.darold.net/report.html
.Sp
There is also a more advanced report with migration cost. See the dedicated
chapter about Migration Cost Evaluation.
.IP "\s-1ESTIMATE_COST\s0" 4
.IX Item "ESTIMATE_COST"
Activate the migration cost evaluation. Must only be used with the \s-1SHOW_REPORT,
FUNCTION, PROCEDURE, PACKAGE\s0 and \s-1QUERY\s0 export types. Default is disabled.
You may want to use the \-\-estimate_cost command line option instead to activate
this functionality. Note that enabling this directive will force \s-1PLSQL_PGSQL\s0
activation.
.IP "\s-1COST_UNIT_VALUE\s0" 4
.IX Item "COST_UNIT_VALUE"
Set the value in minutes of the migration cost evaluation unit. Default
is five minutes per unit. See \-\-cost_unit_value to change the unit value
at command line.
.IP "\s-1DUMP_AS_HTML\s0" 4
.IX Item "DUMP_AS_HTML"
By default when using \s-1SHOW_REPORT\s0 the migration report is generated as simple
text, enabling this directive will force ora2pg to create a report in \s-1HTML\s0
format.
.Sp
See http://ora2pg.darold.net/report.html for a sample report.
.IP "\s-1HUMAN_DAYS_LIMIT\s0" 4
.IX Item "HUMAN_DAYS_LIMIT"
Use this directive to redefine the number of human-days limit where the
migration assessment level must switch from B to C. Default is set to 10
human-days.
.IP "\s-1JOBS\s0" 4
.IX Item "JOBS"
This configuration directive adds multiprocess support to the \s-1COPY, FUNCTION\s0
and \s-1PROCEDURE\s0 export types; the value is the number of processes to use.
Default is multiprocess disabled.
.Sp
This directive is used to set the number of cores used to parallelize
data import into PostgreSQL. During a \s-1FUNCTION\s0 or \s-1PROCEDURE\s0 export each
function will be translated to plpgsql using a new process; the performance
gain can be very significant when you have a lot of functions to convert.
.Sp
There is no limit to parallel processing other than the number of cores
and the PostgreSQL I/O performance capabilities.
.Sp
This doesn't work under the Windows operating system; it is simply disabled.
.IP "\s-1ORACLE_COPIES\s0" 4
.IX Item "ORACLE_COPIES"
This configuration directive adds multiprocess support to extract data
from Oracle. The value is the number of processes used to parallelize
the select query. Default is parallel query disabled.
.Sp
The parallelism is built by splitting the query according to the number
of cores given as the value of \s-1ORACLE_COPIES,\s0 as follows:
.Sp
.Vb 1
\&        SELECT * FROM MYTABLE WHERE ABS(MOD(COLUMN, ORACLE_COPIES)) = CUR_PROC
.Ve
.Sp
where \s-1COLUMN\s0 is a technical key, like a primary or unique key, on which the
split is based, and \s-1CUR_PROC\s0 is the current core used by the query. You can
also force the column name to use with the \s-1DEFINED_PK\s0 configuration directive.
.Sp
This doesn't work under the Windows operating system; it is simply disabled.
.IP "\s-1DEFINED_PK\s0" 4
.IX Item "DEFINED_PK"
This directive is used to define the technical key used to split
the query between the number of cores set with the \s-1ORACLE_COPIES\s0 variable.
For example:
.Sp
.Vb 1
\&        DEFINED_PK      EMPLOYEES:employee_id
.Ve
.Sp
The parallel query that will be used, supposing that \-J or \s-1ORACLE_COPIES\s0
is set to 8, is:
.Sp
.Vb 1
\&        SELECT * FROM EMPLOYEES WHERE ABS(MOD(employee_id, 8)) = N
.Ve
.Sp
where N is the current process forked starting from 0.
.IP "\s-1PARALLEL_TABLES\s0" 4
.IX Item "PARALLEL_TABLES"
This directive is used to define the number of tables that will be processed
in parallel for data extraction. The limit is the number of cores on your machine.
Ora2Pg will open one database connection for each parallel table extraction.
This directive, when greater than 1, will invalidate \s-1ORACLE_COPIES\s0 but not \s-1JOBS,\s0
so the real number of processes that will be used is \s-1PARALLEL_TABLES\s0 * \s-1JOBS.\s0
.Sp
Note that this directive, when set greater than 1, will also automatically
enable the \s-1FILE_PER_TABLE\s0 directive if you are exporting to files. This is
used to export tables and views in separate files.
.Sp
Use \s-1PARALLEL_TABLES\s0 to use parallelism with \s-1COPY, INSERT\s0 and \s-1TEST_DATA\s0 actions.
It is also useful with \s-1TEST, TEST_COUNT,\s0 and \s-1SHOW_TABLE\s0 if \-\-count_rows is
used for real row count.
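.Sp
For example, assuming the \-P and \-j command line switches correspond to
\&\s-1PARALLEL_TABLES\s0 and \s-1JOBS,\s0 the following sketch extracts four tables at
a time with two import processes each, eight processes in total:
.Sp
.Vb 1
\&        ora2pg \-c ora2pg.conf \-t COPY \-P 4 \-j 2
.Ve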
.IP "\s-1DEFAULT_PARALLELISM_DEGREE\s0" 4
.IX Item "DEFAULT_PARALLELISM_DEGREE"
You can force Ora2Pg to use a /*+ \s-1PARALLEL\s0(tbname, degree) */ hint in each
query used to export data from Oracle by setting a value greater than 1 in
this directive. A value of 0 or 1 disables the use of the parallel hint.
Default is disabled.
.IP "\s-1FDW_SERVER\s0" 4
.IX Item "FDW_SERVER"
This directive is used to set the name of the foreign data server that is used
in the \*(L"\s-1CREATE SERVER\s0 name \s-1FOREIGN DATA WRAPPER\s0 <fdw_extension> ...\*(R" command.
This name will then be used in the \*(L"\s-1CREATE FOREIGN TABLE ...\*(R" SQL\s0 commands and
to import data using oracle_fdw. Default is no foreign server defined.
This only concerns the \s-1FDW, COPY\s0 and \s-1INSERT\s0 export types. For export type
\&\s-1FDW\s0 the default value is orcl.
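.Sp
For example, with \s-1FDW_SERVER\s0 set to orcl, the \s-1FDW\s0 export should produce
statements of this form (the connection string here is only an assumption):
.Sp
.Vb 2
\&        CREATE SERVER orcl FOREIGN DATA WRAPPER oracle_fdw OPTIONS (dbserver \*(Aq//192.168.1.29:1521/pdb1\*(Aq);
\&        CREATE FOREIGN TABLE oratab (...) SERVER orcl OPTIONS (table \*(AqORATAB\*(Aq);
.Ve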
.IP "\s-1FDW_IMPORT_SCHEMA\s0" 4
.IX Item "FDW_IMPORT_SCHEMA"
Schema where foreign tables for data migration will be created. If you use
several instances of ora2pg for data migration through the foreign data
wrapper, you might need to change the name of the schema for each instance.
Default: ora2pg_fdw_import
.IP "\s-1DROP_FOREIGN_SCHEMA\s0" 4
.IX Item "DROP_FOREIGN_SCHEMA"
By default Ora2Pg drops the temporary schema ora2pg_fdw_import used to import
the Oracle foreign schema before each new import. If you want to preserve
the existing schema because of modifications or the use of a third party
server, disable this directive.
.IP "\s-1EXTERNAL_TO_FDW\s0" 4
.IX Item "EXTERNAL_TO_FDW"
This directive, enabled by default, allows exporting Oracle's External Tables
as file_fdw foreign tables. To not export these tables at all, set the
directive to 0.
.IP "\s-1INTERNAL_DATE_MAX\s0" 4
.IX Item "INTERNAL_DATE_MAX"
Internal timestamps retrieved from custom types are extracted in the following
format: 01\-JAN\-77 12.00.00.000000 \s-1AM.\s0 It is impossible to know the exact century
that must be used, so by default any year below 49 will be added to 2000
and the others to 1900. You can use this directive to change the default value 49.
This is only relevant if you have a user defined type with a timestamp column.
.IP "\s-1AUDIT_USER\s0" 4
.IX Item "AUDIT_USER"
Set the comma-separated list of usernames that must be used to filter
queries from the \s-1DBA_AUDIT_TRAIL\s0 table. Default is to not scan this
table and to never look for queries. This parameter is used only with the
\&\s-1SHOW_REPORT\s0 and \s-1QUERY\s0 export types with no input file for queries.
Note that queries will be normalized before output, unlike when a file
is given at input using the \-i option or \s-1INPUT\s0 directive.
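.Sp
For example, to scan \s-1DBA_AUDIT_TRAIL\s0 for queries issued by two (hypothetical)
application users:
.Sp
.Vb 1
\&        AUDIT_USER      APPUSER1,APPUSER2
.Ve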
.IP "\s-1FUNCTION_CHECK\s0" 4
.IX Item "FUNCTION_CHECK"
Disable this directive if you want to disable check_function_bodies.
.Sp
.Vb 1
\&        SET check_function_bodies = false;
.Ve
.Sp
It disables validation of the function body string during \s-1CREATE FUNCTION.\s0
Default is to use the postgresql.conf setting, which enables it by default.
.IP "\s-1ENABLE_BLOB_EXPORT\s0" 4
.IX Item "ENABLE_BLOB_EXPORT"
Exporting \s-1BLOB\s0 takes time; in some circumstances you may want to export
all data except the \s-1BLOB\s0 columns. In this case disable this directive and
the \s-1BLOB\s0 columns will not be included in the data export. Take care that the
target bytea columns do not have a \s-1NOT NULL\s0 constraint.
.IP "\s-1ENABLE_CLOB_EXPORT\s0" 4
.IX Item "ENABLE_CLOB_EXPORT"
Same behavior as \s-1ENABLE_BLOB_EXPORT\s0 but for \s-1CLOB.\s0
.IP "\s-1DATA_EXPORT_ORDER\s0" 4
.IX Item "DATA_EXPORT_ORDER"
By default the data export order is done by sorting on table name. If you
have huge tables at the end of the alphabetical order and you are using
multiprocess, it can be better to set the sort order on size so that multiple
small tables can be processed before the largest tables finish. In this case
set this directive to size. Possible values are name and size. Note that the
export types \s-1SHOW_TABLE\s0 and \s-1SHOW_COLUMN\s0 will use this sort order too, not
only the \s-1COPY\s0 or \s-1INSERT\s0 export types.
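.Sp
For example:
.Sp
.Vb 2
\&        # Sort tables by size instead of by name during data export
\&        DATA_EXPORT_ORDER       size
.Ve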
.SS "Limiting objects to export"
.IX Subsection "Limiting objects to export"
You may want to export only a part of an Oracle database; here is a set of
configuration directives that will allow you to control what parts of the
database should be exported.
.IP "\s-1ALLOW\s0" 4
.IX Item "ALLOW"
This directive allows you to set a list of objects to which the export must be
limited, excluding all other objects of the same export type. The value is
a space or comma-separated list of object names to export. You can include
valid regex in the list. For example:
.Sp
.Vb 1
\&        ALLOW           EMPLOYEES SALE_.* COUNTRIES .*_GEOM_SEQ
.Ve
.Sp
will export objects with names \s-1EMPLOYEES, COUNTRIES,\s0 all objects beginning with
\&'\s-1SALE_\s0' and all objects with a name ending in '_GEOM_SEQ'. The objects depend
on the export type. Note that regex will not work with an 8i database; you must
use the % placeholder instead, and Ora2Pg will use the \s-1LIKE\s0 operator.
.Sp
This is the way to declare global filters that will be used with the current
export type. You can also use extended filters that will be applied to specific
objects or only to their related export type. For example:
.Sp
.Vb 1
\&        ora2pg \-p \-c ora2pg.conf \-t TRIGGER \-a \*(AqTABLE[employees]\*(Aq
.Ve
.Sp
will limit the export of triggers to those defined on the table employees. If
you want to extract all triggers but not some \s-1INSTEAD OF\s0 triggers:
.Sp
.Vb 1
\&        ora2pg \-c ora2pg.conf \-t TRIGGER \-e \*(AqVIEW[trg_view_.*]\*(Aq
.Ve
.Sp
Or a more complex form:
.Sp
.Vb 2
\&        ora2pg \-p \-c ora2pg.conf \-t TABLE \-a \*(AqTABLE[EMPLOYEES]\*(Aq \e
\&                \-e \*(AqINDEX[emp_.*];CKEY[emp_salary_min]\*(Aq
.Ve
.Sp
This command will export the definition of the employee table but will exclude
all indexes beginning with 'emp_' and the \s-1CHECK\s0 constraint called 'emp_salary_min'.
.Sp
When exporting partitions you can exclude some partition tables by using:
.Sp
.Vb 1
\&        ora2pg \-p \-c ora2pg.conf \-t PARTITION \-e \*(AqPARTITION[PART_199.* PART_198.*]\*(Aq
.Ve
.Sp
This will exclude the partition tables for years 1980 to 1999 from the export
but not the main partition table. The triggers will also be adapted to exclude
those tables.
.Sp
With \s-1GRANT\s0 export you can use this extended form to exclude some users from the
export or limit the export to some others:
.Sp
.Vb 1
\&        ora2pg \-p \-c ora2pg.conf \-t GRANT \-a \*(AqUSER1 USER2\*(Aq
.Ve
.Sp
or
.Sp
.Vb 1
\&        ora2pg \-p \-c ora2pg.conf \-t GRANT \-a \*(AqGRANT[USER1 USER2]\*(Aq
.Ve
.Sp
will limit the export of grants to users \s-1USER1\s0 and \s-1USER2.\s0 But if you don't
want to export grants on some functions for these users, for example:
.Sp
.Vb 1
\&        ora2pg \-p \-c ora2pg.conf \-t GRANT \-a \*(AqUSER1 USER2\*(Aq \-e \*(AqFUNCTION[adm_.*];PROCEDURE[adm_.*]\*(Aq
.Ve
.Sp
Advanced filters may need some learning.
.Sp
Oracle doesn't allow the use of lookahead expressions, so you may want to
exclude some objects that match the \s-1ALLOW\s0 regexp you have defined. For
example, if you want to export all tables starting with E but not those
starting with \s-1EXP,\s0 it is not possible to do that in a single expression.
This is why you can start a regular expression with the ! character to exclude
objects matching the regexp given just after. Our previous example can be
written as follows:
.Sp
.Vb 1
\&        ALLOW   E.* !EXP.*
.Ve
.Sp
it will be translated into:
.Sp
.Vb 1
\&         REGEXP_LIKE(..., \*(Aq^E.*$\*(Aq) AND NOT REGEXP_LIKE(..., \*(Aq^EXP.*$\*(Aq)
.Ve
.Sp
in the object search expression.
.IP "\s-1EXCLUDE\s0" 4
.IX Item "EXCLUDE"
This directive is the opposite of the previous one; it allows you to define a
space or comma-separated list of object names to exclude from the export. You
can include valid regex in the list. For example:
.Sp
.Vb 1
\&        EXCLUDE         EMPLOYEES TMP_.* COUNTRIES
.Ve
.Sp
will exclude objects with names \s-1EMPLOYEES, COUNTRIES\s0 and all tables beginning
with 'tmp_'.
.Sp
For example, you can ban some unwanted functions from the export with this directive:
.Sp
.Vb 1
\&        EXCLUDE         write_to_.* send_mail_.*
.Ve
.Sp
this example will exclude all functions, procedures, or functions in a package
whose names begin with those regex. Note that regex will not work with an
8i database; you must use the % placeholder instead, and Ora2Pg will use the \s-1NOT
LIKE\s0 operator.
.Sp
See above (directive '\s-1ALLOW\s0') for the extended syntax.
.IP "\s-1NO_EXCLUDED_TABLE\s0" 4
.IX Item "NO_EXCLUDED_TABLE"
By default Ora2Pg excludes from export some Oracle \*(L"garbage\*(R" tables that should
never be part of an export. This behavior generates a lot of \s-1REGEXP_LIKE\s0
expressions which slow down the export when looking at tables. To disable
this behavior enable this directive; you will then have to exclude or clean up
the unwanted tables by yourself. The regexps used to exclude the tables are
defined in the array \f(CW@EXCLUDED_TABLES\fR in lib/Ora2Pg.pm. Note that this
behavior is independent of the \s-1EXCLUDE\s0 configuration directive.
.IP "\s-1VIEW_AS_TABLE\s0" 4
.IX Item "VIEW_AS_TABLE"
Set which views to export as tables. By default none. The value must be a list
of view names or regexps separated by space or comma. If the object name is a
view and the export type is \s-1TABLE,\s0 the view will be exported as a create
table statement. If the export type is \s-1COPY\s0 or \s-1INSERT,\s0 the corresponding
data will be exported.
.Sp
See chapter \*(L"Exporting views as PostgreSQL table\*(R" for more details.
.IP "\s-1MVIEW_AS_TABLE\s0" 4
.IX Item "MVIEW_AS_TABLE"
Set which materialized views to export as tables. By default none. The value
must be a list of materialized view names or regexps separated by space or
comma. If the object name is a materialized view and the export type is
\&\s-1TABLE,\s0 the view will be exported as a create table statement. If the export
type is \s-1COPY\s0 or \s-1INSERT,\s0 the corresponding data will be exported.
.IP "\s-1NO_VIEW_ORDERING\s0" 4
.IX Item "NO_VIEW_ORDERING"
By default Ora2Pg tries to order views to avoid errors at import time with
nested views. With a huge number of views this can take a very long time;
you can bypass this ordering by enabling this directive.
.IP "\s-1GRANT_OBJECT\s0" 4
.IX Item "GRANT_OBJECT"
When exporting GRANTs you can specify a comma-separated list of object types
for which privileges will be exported. Default is to export privileges for all
objects. Here are the possible values: \s-1TABLE, VIEW, MATERIALIZED VIEW,
SEQUENCE, PROCEDURE, FUNCTION, PACKAGE BODY, TYPE, SYNONYM, DIRECTORY.\s0 Only
one object type is allowed at a time. For example, set it to \s-1TABLE\s0 if you
just want to export privileges on tables. You can use the \-g option to
override it.
.Sp
When used, this directive prevents the export of users unless it is set to
\&\s-1USER.\s0 In this case only user definitions are exported.
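.Sp
For example, to export privileges on tables only:
.Sp
.Vb 1
\&        GRANT_OBJECT    TABLE
.Ve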
.IP "\s-1WHERE\s0" 4
.IX Item "WHERE"
This directive allows you to specify a \s-1WHERE\s0 clause filter when dumping the
contents of tables. The value is constructed as follows:
TABLE_NAME[\s-1WHERE_CLAUSE\s0], or if you have only one where clause for all
tables, just put the where clause as the value. Combining both forms is also
possible. Here are some examples:
.Sp
.Vb 2
\&        # Global where clause applying to all tables included in the export
\&        WHERE  1=1
\&
\&        # Apply the where clause only on table TABLE_NAME
\&        WHERE  TABLE_NAME[ID1=\*(Aq001\*(Aq]
\&
\&        # Applies two different clauses on tables TABLE_NAME and OTHER_TABLE
\&        # and a generic where clause on DATE_CREATE to all other tables
\&        WHERE  TABLE_NAME[ID1=\*(Aq001\*(Aq OR ID1=\*(Aq002\*(Aq] DATE_CREATE > \*(Aq2001\-01\-01\*(Aq OTHER_TABLE[NAME=\*(Aqtest\*(Aq]
.Ve
.Sp
Any where clause not included in a table-name bracket clause will be applied
to all exported tables, including the tables defined in the where clause.
These \s-1WHERE\s0 clauses are very useful if you want to archive some data or,
on the contrary, only export some recent data.
.Sp
To be able to quickly test data import it is useful to limit data export to the
first thousand tuples of each table. For Oracle define the following clause:
.Sp
.Vb 1
\&        WHERE   ROWNUM < 1000
.Ve
.Sp
and for MySQL, use the following:
.Sp
.Vb 1
\&        WHERE   1=1 LIMIT 1,1000
.Ve
.Sp
This limit can also be restricted to the data export of specific tables.
.Sp
The command line option \-W or \-\-where will override this directive for the
global part and per table if the table name is the same.
.IP "\s-1TOP_MAX\s0" 4
.IX Item "TOP_MAX"
This directive is used to limit the number of items shown in the top N lists,
like the top list of tables per number of rows and the top list of largest
tables in megabytes. By default it is set to 10 items.
.IP "\s-1LOG_ON_ERROR\s0" 4
.IX Item "LOG_ON_ERROR"
Enable this directive if you want to continue direct data import on error.
When Ora2Pg receives an error in a \s-1COPY\s0 or \s-1INSERT\s0 statement from
PostgreSQL, it will log the statement to a file called TABLENAME_error.log in
the output directory and continue with the next bulk of data. This way you can
try to fix the statements and manually reload the error log file. Default is
disabled: abort import on error.
.IP "\s-1REPLACE_QUERY\s0" 4
.IX Item "REPLACE_QUERY"
Sometimes you may want to extract data from an Oracle table but need a custom
query for that: not just a \*(L"\s-1SELECT\s0 * \s-1FROM\s0 table\*(R" as Ora2Pg does,
but a more complex query. This directive allows you to override the query
used by Ora2Pg to extract data. The format is TABLENAME[\s-1SQL_QUERY\s0].
If you have multiple tables to extract by replacing the Ora2Pg query, you can
define multiple \s-1REPLACE_QUERY\s0 lines.
.Sp
.Vb 1
\&        REPLACE_QUERY   EMPLOYEES[SELECT e.id,e.firstname,lastname FROM EMPLOYEES e JOIN EMP_UPDT u ON (e.id=u.id AND u.cdate>\*(Aq2014\-08\-01 00:00:00\*(Aq)]
.Ve
.SS "Control of Full Text Search export"
.IX Subsection "Control of Full Text Search export"
Several directives can be used to control the way Ora2Pg exports Oracle Text
search indexes. By default \s-1CONTEXT\s0 indexes are exported to PostgreSQL
\&\s-1FTS\s0 indexes, while \s-1CTXCAT\s0 indexes are exported as indexes using the
pg_trgm extension.
.IP "\s-1CONTEXT_AS_TRGM\s0" 4
.IX Item "CONTEXT_AS_TRGM"
Force Ora2Pg to translate Oracle Text indexes into PostgreSQL indexes using
the pg_trgm extension. Default is to translate \s-1CONTEXT\s0 indexes into
\&\s-1FTS\s0 indexes and \s-1CTXCAT\s0 indexes using pg_trgm. Most of the time using
pg_trgm is enough; this is what this directive is for. You need to create the
pg_trgm extension in the destination database before importing the objects:
.Sp
.Vb 1
\&        CREATE EXTENSION pg_trgm;
.Ve
.IP "\s-1FTS_INDEX_ONLY\s0" 4
.IX Item "FTS_INDEX_ONLY"
By default Ora2Pg creates a function-based index to translate Oracle Text
indexes.
.Sp
.Vb 2
\&        CREATE INDEX ON t_document
\&                USING gin(to_tsvector(\*(Aqpg_catalog.french\*(Aq, title));
.Ve
.Sp
You will have to rewrite the \s-1\fBCONTAINS\s0()\fR clause using \fBto_tsvector()\fR, for example:
.Sp
.Vb 2
\&        SELECT id,title FROM t_document
\&                WHERE to_tsvector(title) @@ to_tsquery(\*(Aqsearch_word\*(Aq);
.Ve
.Sp
To force Ora2Pg to create an extra tsvector column with a dedicated trigger
for \s-1FTS\s0 indexes, disable this directive. In this case, Ora2Pg will add the
column as follows:
.Sp
.Vb 1
\&        ALTER TABLE t_document ADD COLUMN tsv_title tsvector;
.Ve
.Sp
Then update the column to compute the \s-1FTS\s0 vectors if data have been loaded
before:
.Sp
.Vb 2
\&        UPDATE t_document SET tsv_title =
\&                to_tsvector(\*(Aqpg_catalog.french\*(Aq, coalesce(title,\*(Aq\*(Aq));
.Ve
.Sp
To automatically update the column when a modification of the title column
occurs, Ora2Pg adds the following trigger:
.Sp
.Vb 12
\&        CREATE FUNCTION tsv_t_document_title() RETURNS trigger AS $$
\&        BEGIN
\&               IF TG_OP = \*(AqINSERT\*(Aq OR new.title != old.title THEN
\&                       new.tsv_title :=
\&                       to_tsvector(\*(Aqpg_catalog.french\*(Aq, coalesce(new.title,\*(Aq\*(Aq));
\&               END IF;
\&               return new;
\&        END
\&        $$ LANGUAGE plpgsql;
\&        CREATE TRIGGER trig_tsv_t_document_title BEFORE INSERT OR UPDATE
\&         ON t_document
\&         FOR EACH ROW EXECUTE PROCEDURE tsv_t_document_title();
.Ve
.Sp
When the Oracle text index is defined over multiple columns, Ora2Pg will use
\&\fBsetweight()\fR to set a weight in the order of the column declarations.
.IP "\s-1FTS_CONFIG\s0" 4
.IX Item "FTS_CONFIG"
Use this directive to force the text search configuration to use. When it is
not set, Ora2Pg will autodetect the stemmer used by Oracle for each index and
fall back to pg_catalog.english if the information is not found.
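.Sp
For example, to force the use of the French configuration:
.Sp
.Vb 1
\&        FTS_CONFIG      pg_catalog.french
.Ve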
.IP "\s-1USE_UNACCENT\s0" 4
.IX Item "USE_UNACCENT"
If you want to perform your text searches in an accent-insensitive way, enable
this directive. Ora2Pg will create a helper function over \fBunaccent()\fR and
create the pg_trgm indexes using this function. With \s-1FTS\s0 Ora2Pg will
redefine your text search configuration, for example:
.Sp
.Vb 3
\&      CREATE TEXT SEARCH CONFIGURATION fr (COPY = french); 
\&      ALTER TEXT SEARCH CONFIGURATION fr
\&              ALTER MAPPING FOR hword, hword_part, word WITH unaccent, french_stem;
.Ve
.Sp
then set the \s-1FTS_CONFIG\s0 directive in ora2pg.conf to fr instead of pg_catalog.english.
.Sp
When enabled, Ora2pg will create the wrapper function:
.Sp
.Vb 6
\&      CREATE OR REPLACE FUNCTION unaccent_immutable(text)
\&      RETURNS text AS
\&      $$
\&          SELECT public.unaccent(\*(Aqpublic.unaccent\*(Aq, $1);
\&      $$ LANGUAGE sql IMMUTABLE
\&         COST 1;
.Ve
.Sp
The indexes are exported as follows:
.Sp
.Vb 2
\&      CREATE INDEX t_document_title_unaccent_trgm_idx ON t_document 
\&          USING gin (unaccent_immutable(title) gin_trgm_ops);
.Ve
.Sp
In your queries you will need to use the same function in the search to
be able to use the function-based index. Example:
.Sp
.Vb 2
\&        SELECT * FROM t_document
\&                WHERE unaccent_immutable(title) LIKE \*(Aq%donnees%\*(Aq;
.Ve
.IP "\s-1USE_LOWER_UNACCENT\s0" 4
.IX Item "USE_LOWER_UNACCENT"
Same as above but call \fBlower()\fR in the \fBunaccent_immutable()\fR function:
.Sp
.Vb 5
\&      CREATE OR REPLACE FUNCTION unaccent_immutable(text)
\&      RETURNS text AS
\&      $$
\&          SELECT lower(public.unaccent(\*(Aqpublic.unaccent\*(Aq, $1));
\&      $$ LANGUAGE sql IMMUTABLE;
.Ve
.SS "Modifying object structure"
.IX Subsection "Modifying object structure"
One of the great strengths of Ora2Pg is its flexibility in replicating an
Oracle database into a PostgreSQL database with a different structure or
schema. The following configuration directives allow you to map those
differences.
.IP "\s-1REORDERING_COLUMNS\s0" 4
.IX Item "REORDERING_COLUMNS"
Enable this directive to reorder columns and minimize the footprint on disk,
so that more rows fit on a data page, which is the most important factor for
speed. Default is disabled, meaning the same order as in the Oracle table
definitions is kept, which should be enough for most usages. This directive is
only used with \s-1TABLE\s0 export.
.IP "\s-1MODIFY_STRUCT\s0" 4
.IX Item "MODIFY_STRUCT"
This directive allows you to limit the columns to extract for a given table.
The value consists of a space-separated list of table names with a set of
columns between parentheses, as follows:
.Sp
.Vb 1
\&        MODIFY_STRUCT   NOM_TABLE(nomcol1,nomcol2,...) ...
.Ve
.Sp
for example:
.Sp
.Vb 1
\&        MODIFY_STRUCT   T_TEST1(id,dossier) T_TEST2(id,fichier)
.Ve
.Sp
This will only extract columns 'id' and 'dossier' from table T_TEST1 and
columns 'id' and 'fichier' from the T_TEST2 table. This directive can only be
used with \s-1TABLE, COPY\s0 or \s-1INSERT\s0 export. With \s-1TABLE\s0 export, the
create table \s-1DDL\s0 will respect the new list of columns, and all indexes or
foreign keys pointing to or from a removed column will not be exported.
.IP "\s-1EXCLUDE_COLUMNS\s0" 4
.IX Item "EXCLUDE_COLUMNS"
Instead of redefining the table structure with \s-1MODIFY_STRUCT\s0 you may want
to exclude some columns from the table export. The value consists of a
space-separated list of table names with a set of columns between parentheses,
as follows:
.Sp
.Vb 1
\&        EXCLUDE_COLUMNS NOM_TABLE(nomcol1,nomcol2,...) ...
.Ve
.Sp
for example:
.Sp
.Vb 1
\&        EXCLUDE_COLUMNS T_TEST1(id,dossier) T_TEST2(id,fichier)
.Ve
.Sp
This will exclude from the export columns 'id' and 'dossier' from table
T_TEST1 and columns 'id' and 'fichier' from the T_TEST2 table. This directive
can only be used with \s-1TABLE, COPY\s0 or \s-1INSERT\s0 export. With \s-1TABLE\s0
export, the create table \s-1DDL\s0 will respect the new list of columns, and all
indexes or foreign keys pointing to or from a removed column will not be
exported.
.IP "\s-1REPLACE_TABLES\s0" 4
.IX Item "REPLACE_TABLES"
This directive allows you to remap a list of Oracle table names to PostgreSQL
table names during export. The value is a list of space-separated values with
the following structure:
.Sp
.Vb 1
\&        REPLACE_TABLES  ORIG_TBNAME1:DEST_TBNAME1 ORIG_TBNAME2:DEST_TBNAME2
.Ve
.Sp
Oracle tables \s-1ORIG_TBNAME1\s0 and \s-1ORIG_TBNAME2\s0 will be respectively
renamed into \s-1DEST_TBNAME1\s0 and \s-1DEST_TBNAME2.\s0
.IP "\s-1REPLACE_COLS\s0" 4
.IX Item "REPLACE_COLS"
Like table names, column names can be remapped to different names using the
following syntax:
.Sp
.Vb 1
\&        REPLACE_COLS    ORIG_TBNAME(ORIG_COLNAME1:NEW_COLNAME1,ORIG_COLNAME2:NEW_COLNAME2)
.Ve
.Sp
For example:
.Sp
.Vb 1
\&        REPLACE_COLS    T_TEST(dico:dictionary,dossier:folder)
.Ve
.Sp
will rename Oracle columns 'dico' and 'dossier' from table T_TEST into the new
names 'dictionary' and 'folder'.
.IP "\s-1REPLACE_AS_BOOLEAN\s0" 4
.IX Item "REPLACE_AS_BOOLEAN"
If you want to change the type of some Oracle columns into PostgreSQL boolean
during the export, you can define here a space-separated list of tables and
columns as follows.
.Sp
.Vb 1
\&        REPLACE_AS_BOOLEAN     TB_NAME1:COL_NAME1 TB_NAME1:COL_NAME2 TB_NAME2:COL_NAME2
.Ve
.Sp
The values set in the listed boolean columns will be replaced with 't' and 'f'
according to the default replacement values and those additionally set in the
directive \s-1BOOLEAN_VALUES.\s0
.Sp
Note that if you have modified the table name with \s-1REPLACE_TABLES\s0 and/or
the column name with \s-1REPLACE_COLS,\s0 you need to use the original table
and/or column name.
.Sp
.Vb 2
\&        REPLACE_COLS            TB_NAME1(OLD_COL_NAME1:NEW_COL_NAME1)
\&        REPLACE_AS_BOOLEAN      TB_NAME1:OLD_COL_NAME1
.Ve
.Sp
You can also give a type and a precision to automatically convert all fields
of that type into booleans. For example:
.Sp
.Vb 1
\&        REPLACE_AS_BOOLEAN      NUMBER:1 CHAR:1 TB_NAME1:COL_NAME1 TB_NAME1:COL_NAME2
.Ve
.Sp
will also replace any field of type \fBnumber\fR\|(1) or \fBchar\fR\|(1) with a boolean in all
exported tables.
.IP "\s-1BOOLEAN_VALUES\s0" 4
.IX Item "BOOLEAN_VALUES"
Use this directive to add additional definitions of the possible boolean
values used in Oracle fields. You must set a space-separated list of
\&\s-1TRUE:FALSE\s0 values. By default here are the values recognized by Ora2Pg:
.Sp
.Vb 1
\&        BOOLEAN_VALUES          yes:no y:n 1:0 true:false enabled:disabled
.Ve
.Sp
Any values defined here will be added to the default list.
.IP "\s-1REPLACE_ZERO_DATE\s0" 4
.IX Item "REPLACE_ZERO_DATE"
When Ora2Pg finds a \*(L"zero\*(R" date (0000\-00\-00 00:00:00) it is replaced by
a \s-1NULL.\s0 This can be a problem if your column is defined with a
\&\s-1NOT NULL\s0 constraint. If you cannot remove the constraint, use this
directive to set an arbitrary date that will be used instead. You can also use
\&\-INFINITY if you don't want to use a fake date.
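.Sp
For example, with an arbitrary replacement date, or with \-INFINITY:
.Sp
.Vb 2
\&        REPLACE_ZERO_DATE       1970\-01\-01 00:00:00
\&        REPLACE_ZERO_DATE       \-INFINITY
.Ve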
.IP "\s-1INDEXES_SUFFIX\s0" 4
.IX Item "INDEXES_SUFFIX"
Add the given value as a suffix to index names. Useful if you have indexes
with the same names as tables. For example:
.Sp
.Vb 1
\&        INDEXES_SUFFIX          _idx
.Ve
.Sp
will add _idx at the end of all index names. Not so common, but it can help.
.IP "\s-1INDEXES_RENAMING\s0" 4
.IX Item "INDEXES_RENAMING"
Enable this directive to rename all indexes using tablename_columns_names.
This can be very useful for databases that use the same index name multiple
times or that use the same name as a table, which is not allowed by
PostgreSQL. Disabled by default.
.IP "\s-1USE_INDEX_OPCLASS\s0" 4
.IX Item "USE_INDEX_OPCLASS"
Operator classes text_pattern_ops, varchar_pattern_ops, and bpchar_pattern_ops
support B\-tree indexes on the corresponding types. The difference from the
default operator classes is that the values are compared strictly character by
character rather than according to the locale-specific collation rules. This
makes these operator classes suitable for use by queries involving pattern
matching expressions (\s-1LIKE\s0 or \s-1POSIX\s0 regular expressions) when the database
does not use the standard \*(L"C\*(R" locale. If you enable this directive with
value 1, Ora2Pg will export all indexes defined on \fBvarchar2()\fR and
\&\fBchar()\fR columns using those operators. If you set it to a value greater
than 1, it will only change indexes on columns where the character limit is
greater than or equal to this value. For example, set it to 128 to create this
kind of index on columns of type varchar2(N) where N >= 128.
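.Sp
For example:
.Sp
.Vb 1
\&        USE_INDEX_OPCLASS       128
.Ve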
.IP "\s-1RENAME_PARTITION\s0" 4
.IX Item "RENAME_PARTITION"
Enable this directive if you want your partition tables to be renamed.
Disabled by default. If you have multiple partitioned tables, when exported to
PostgreSQL some partitions could have the same name but different parent
tables. This is not allowed; table names must be unique, so in this case
enable this directive. A partition will be renamed following the rule:
.Sp
.Vb 1
\&        "tablename"_part"pos"
.Ve
.Sp
where \*(L"pos\*(R" is the partition number. For a subpartition this is:
.Sp
.Vb 1
\&        "tablename"_part"pos"_subpart"pos"
.Ve
.Sp
If this is the partition/subpartition default:
.Sp
.Vb 2
\&        "tablename"_part_default
\&        "tablename"_part"pos"_subpart_default
.Ve
.IP "\s-1DISABLE_PARTITION\s0" 4
.IX Item "DISABLE_PARTITION"
If you don't want to reproduce the partitioning as in Oracle and want to
export all partitioned Oracle data into the main single table in PostgreSQL,
enable this directive. Ora2Pg will export all data into the main table name.
Default is to use partitioning: Ora2Pg will export data from each partition
and import it into the dedicated PostgreSQL partition table.
.IP "\s-1DISABLE_UNLOGGED\s0" 4
.IX Item "DISABLE_UNLOGGED"
By default Ora2Pg exports Oracle tables with the \s-1NOLOGGING\s0 attribute as
\&\s-1UNLOGGED\s0 tables. You may want to fully disable this feature because
you will lose all data from unlogged tables in case of a PostgreSQL crash.
Set it to 1 to export all tables as normal tables.
.IP "\s-1DOUBLE_MAX_VARCHAR\s0" 4
.IX Item "DOUBLE_MAX_VARCHAR"
Increase varchar maximum character constraints to support PostgreSQL two-byte
character encoding when the source database applies the length constraint
on characters, not bytes. Default disabled.
.SS "Oracle Spatial to PostGis"
.IX Subsection "Oracle Spatial to PostGis"
Ora2Pg fully exports spatial objects from Oracle databases. There are some
configuration directives that can be used to control the export.
.IP "\s-1AUTODETECT_SPATIAL_TYPE\s0" 4
.IX Item "AUTODETECT_SPATIAL_TYPE"
By default Ora2Pg looks at indexes to see the spatial constraint type and
dimensions defined under Oracle. Those constraints are passed at index
creation time, for example:
.Sp
.Vb 2
\&        CREATE INDEX ... INDEXTYPE IS MDSYS.SPATIAL_INDEX
\&        PARAMETERS(\*(Aqsdo_indx_dims=2, layer_gtype=point\*(Aq);
.Ve
.Sp
If those Oracle constraints parameters are not set, the default is to export
those columns as generic type \s-1GEOMETRY\s0 to be able to receive any spatial type.
.Sp
The \s-1AUTODETECT_SPATIAL_TYPE\s0 directive allows you to force Ora2Pg to
autodetect the real spatial type and dimension used in a spatial column;
otherwise a non\-constrained \*(L"geometry\*(R" type is used. Enabling this
feature will force Ora2Pg to scan a sample of 50000 lines to look at the
\&\s-1GTYPE\s0 used. You can increase or reduce the sample size by setting the
value of \s-1AUTODETECT_SPATIAL_TYPE\s0 to the desired number of lines to scan.
The directive is enabled by default.
.Sp
For example, in the case of a column named shape and defined with Oracle type
\&\s-1SDO_GEOMETRY,\s0 with \s-1AUTODETECT_SPATIAL_TYPE\s0 disabled it will be converted as:
.Sp
.Vb 1
\&    shape geometry(GEOMETRY) or shape geometry(GEOMETRYZ, 4326)
.Ve
.Sp
and if the directive is enabled and the column just contains a single
geometry type that uses a single dimension:
.Sp
.Vb 1
\&    shape geometry(POLYGON, 4326) or shape geometry(POLYGONZ, 4326)
.Ve
.Sp
with a two or three dimensional polygon.
.IP "\s-1CONVERT_SRID\s0" 4
.IX Item "CONVERT_SRID"
This directive allows you to control the automatic conversion of Oracle
\&\s-1SRID\s0 to the standard \s-1EPSG.\s0 If enabled, Ora2Pg will use the Oracle
function sdo_cs.\fBmap_oracle_srid_to_epsg()\fR to convert all \s-1SRIDs.\s0
Enabled by default.
.Sp
If the \s-1SDO_SRID\s0 returned by Oracle is \s-1NULL,\s0 it will be replaced by the
default value 8307 converted to its \s-1EPSG\s0 value: 4326 (see \s-1DEFAULT_SRID\s0).
.Sp
If the value is greater than 1, all \s-1SRIDs\s0 will be forced to this value; in
this case \s-1DEFAULT_SRID\s0 will not be used when Oracle returns a null value
and the value will be forced to \s-1CONVERT_SRID.\s0
.Sp
Note that it is also possible to set the \s-1EPSG\s0 value on the Oracle side
when sdo_cs.\fBmap_oracle_srid_to_epsg()\fR returns \s-1NULL,\s0 if you want to
force the value:
.Sp
.Vb 1
\&  system@db> UPDATE sdo_coord_ref_sys SET legacy_code=41014 WHERE srid = 27572;
.Ve
.IP "\s-1DEFAULT_SRID\s0" 4
.IX Item "DEFAULT_SRID"
Use this directive to override the default \s-1EPSG SRID\s0 to use: 4326.
It can be overwritten by \s-1CONVERT_SRID,\s0 see above.
.IP "\s-1GEOMETRY_EXTRACT_TYPE\s0" 4
.IX Item "GEOMETRY_EXTRACT_TYPE"
This directive can take three values: \s-1WKT\s0 (default), \s-1WKB\s0 and \s-1INTERNAL.\s0
When it is set to \s-1WKT,\s0 Ora2Pg will use \s-1SDO_UTIL.\fBTO_WKTGEOMETRY\s0()\fR to
extract the geometry data. When it is set to \s-1WKB,\s0 Ora2Pg will use the
binary output using \s-1SDO_UTIL.\fBTO_WKBGEOMETRY\s0()\fR. Since these two
extraction types are called on the Oracle side, they are slow and you can
easily run out of memory when you have lots of rows. Also, \s-1WKB\s0 is not
able to export 3D geometries and some geometries like \s-1CURVEPOLYGON.\s0 In
these cases you may use the \s-1INTERNAL\s0 extraction type. It will use a pure
Perl library to convert the \s-1SDO_GEOMETRY\s0 data into a \s-1WKT\s0
representation; the translation is done on the Ora2Pg side. This is a work in
progress, so please validate your exported geometries before use. The default
spatial object extraction type is \s-1INTERNAL.\s0
.IP "\s-1POSTGIS_SCHEMA\s0" 4
.IX Item "POSTGIS_SCHEMA"
Use this directive to add a specific schema to the search path to look
for PostGIS functions.
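.Sp
For example, assuming PostGIS was installed in a schema named gis:
.Sp
.Vb 1
\&        POSTGIS_SCHEMA  gis
.Ve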
.IP "\s-1ST_SRID_FUNCTION\s0" 4
.IX Item "ST_SRID_FUNCTION"
Oracle function to use to extract the srid from ST_Geometry meta information.
Default: \s-1ST_SRID.\s0 For example, it should be set to sde.st_srid for ArcSDE.
.IP "\s-1ST_DIMENSION_FUNCTION\s0" 4
.IX Item "ST_DIMENSION_FUNCTION"
Oracle function to use to extract the dimension from ST_Geometry meta
information. Default: \s-1ST_DIMENSION.\s0 For example, it should be set to
sde.st_dimension for ArcSDE.
.IP "\s-1ST_GEOMETRYTYPE_FUNCTION\s0" 4
.IX Item "ST_GEOMETRYTYPE_FUNCTION"
Oracle function to use to extract the geometry type from an ST_Geometry
column. Default: \s-1ST_GEOMETRYTYPE.\s0 For example, it should be set to
sde.st_geometrytype for ArcSDE.
.IP "\s-1ST_ASBINARY_FUNCTION\s0" 4
.IX Item "ST_ASBINARY_FUNCTION"
Oracle function used to convert an ST_Geometry value into \s-1WKB\s0 format.
Default: \s-1ST_ASBINARY.\s0 For example, it should be set to sde.st_asbinary
for ArcSDE.
.IP "\s-1ST_ASTEXT_FUNCTION\s0" 4
.IX Item "ST_ASTEXT_FUNCTION"
Oracle function used to convert an ST_Geometry value into \s-1WKT\s0 format.
Default: \s-1ST_ASTEXT.\s0 For example, it should be set to sde.st_astext for
ArcSDE.
.SS "PostgreSQL Import"
.IX Subsection "PostgreSQL Import"
By default conversion to PostgreSQL format is written to file 'output.sql'.
The command:
.PP
.Vb 1
\&        psql mydb < output.sql
.Ve
.PP
will import the content of file output.sql into the PostgreSQL database mydb.
.IP "\s-1DATA_LIMIT\s0" 4
.IX Item "DATA_LIMIT"
When you are performing an \s-1INSERT/COPY\s0 export, Ora2Pg proceeds by chunks
of \s-1DATA_LIMIT\s0 tuples for better performance. Tuples are stored in memory
before being written to disk, so if you want speed and have enough system
resources you can raise this limit, for example to 100000 or 1000000. Before
release 7.0, a value of 0 meant no limit, so all tuples were stored in memory
before being flushed to disk. In the 7.x branch this has been removed and the
chunk size will be set to the default: 10000.
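.Sp
For example, to use larger chunks when enough memory is available:
.Sp
.Vb 1
\&        DATA_LIMIT      100000
.Ve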
.IP "\s-1BLOB_LIMIT\s0" 4
.IX Item "BLOB_LIMIT"
When Ora2Pg detects a table with some \s-1BLOB\s0 columns, it will automatically
reduce the value of this directive by dividing it by 10 until its value is
below 1000. You can control this value by setting \s-1BLOB_LIMIT.\s0 Exporting
\&\s-1BLOBs\s0 uses a lot of resources; setting it to too high a value can produce
an \s-1OOM.\s0
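.Sp
For example, to use small chunks for tables with large BLOBs:
.Sp
.Vb 1
\&        BLOB_LIMIT      500
.Ve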
.IP "\s-1CLOB_AS_BLOB\s0" 4
.IX Item "CLOB_AS_BLOB"
Apply the same behavior to \s-1CLOBs\s0 as to \s-1BLOBs\s0 with the
\&\s-1BLOB_LIMIT\s0 setting. This is especially useful if you have large
\&\s-1CLOB\s0 data. Default: enabled.
.IP "\s-1OUTPUT\s0" 4
.IX Item "OUTPUT"
The Ora2Pg output filename can be changed with this directive. Default value
is output.sql. If you set a file name with extension .gz or .bz2, the output
will be automatically compressed. This requires that the Compress::Zlib Perl
module is installed if the filename extension is .gz, and that the bzip2
system command is installed for the .bz2 extension.
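.Sp
For example, to write a gzip-compressed output file:
.Sp
.Vb 1
\&        OUTPUT          output.sql.gz
.Ve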
.IP "\s-1OUTPUT_DIR\s0" 4
.IX Item "OUTPUT_DIR"
Since release 7.0, you can define a base directory where the files will be
written. The directory must exist.
.IP "\s-1BZIP2\s0" 4
.IX Item "BZIP2"
This directive allows you to specify the full path to the bzip2 program if it
cannot be found in the \s-1PATH\s0 environment variable.
.IP "\s-1FILE_PER_CONSTRAINT\s0" 4
.IX Item "FILE_PER_CONSTRAINT"
Allow object constraints to be saved in a separate file during schema export.
The file will be named \s-1CONSTRAINTS_OUTPUT,\s0 where \s-1OUTPUT\s0 is the
value of the corresponding configuration directive. You can use a .gz or .bz2
extension to enable compression. Default is to save all data in the
\&\s-1OUTPUT\s0 file. This directive is usable only with the \s-1TABLE\s0 export
type.
.Sp
The constraints can be imported quickly into PostgreSQL using the \s-1LOAD\s0 export
type to parallelize their creation over multiple (\-j or \s-1JOBS\s0) connections.
.IP "\s-1FILE_PER_INDEX\s0" 4
.IX Item "FILE_PER_INDEX"
Allow indexes to be saved in a separate file during schema export. The file
will be named \s-1INDEXES_OUTPUT,\s0 where \s-1OUTPUT\s0 is the value of the
corresponding configuration directive. You can use a .gz or .bz2 file
extension to enable compression. Default is to save all data in the
\&\s-1OUTPUT\s0 file. This directive is usable only with the \s-1TABLE\s0 and
\&\s-1TABLESPACE\s0 export types. With the \s-1TABLESPACE\s0 export, it is used to
write \*(L"\s-1ALTER INDEX ... TABLESPACE ...\*(R"\s0 into a separate file named
\&\s-1TBSP_INDEXES_OUTPUT\s0 that can be loaded at the end of the migration,
after the index creation, to move the indexes.
.Sp
The indexes can be imported quickly into PostgreSQL using the \s-1LOAD\s0 export
type to parallelize their creation over multiple (\-j or \s-1JOBS\s0) connections.
.IP "\s-1FILE_PER_FKEYS\s0" 4
.IX Item "FILE_PER_FKEYS"
Allow foreign key declarations to be saved in a separate file during
schema export. By default foreign keys are exported into the main
output file or into the CONSTRAINT_output.sql file. When enabled, foreign
keys will be exported into a file named FKEYS_output.sql.
.IP "\s-1FILE_PER_TABLE\s0" 4
.IX Item "FILE_PER_TABLE"
Allow data export to be saved in one file per table/view. The files will be
named tablename_OUTPUT, where \s-1OUTPUT\s0 is the value of the corresponding
configuration directive. You can still use a .gz or .bz2 extension in the
\&\s-1OUTPUT\s0 directive to enable compression. The default of 0 will save all
data in one file; set it to 1 to enable this feature. This is usable only
during the \s-1INSERT\s0 or \s-1COPY\s0 export types.
.IP "\s-1FILE_PER_FUNCTION\s0" 4
.IX Item "FILE_PER_FUNCTION"
Allow functions, procedures and triggers to be saved in one file per object.
The files will be named objectname_OUTPUT, where \s-1OUTPUT\s0 is the value of
the corresponding configuration directive. You can still use a .gz or .bz2
extension in the \s-1OUTPUT\s0 directive to enable compression. The default of 0
will save all in one single file; set it to 1 to enable this feature. This is
usable only during the corresponding export type; the package body export has
a special behavior.
.Sp
When export type is \s-1PACKAGE\s0 and you've enabled this directive, Ora2Pg will
create a directory per package, named with the lower case name of the package,
and will create one file per function/procedure into that directory. If the
configuration directive is not enabled, it will create one file per package as
packagename_OUTPUT, where \s-1OUTPUT\s0 is the value of the corresponding directive.
.IP "\s-1TRUNCATE_TABLE\s0" 4
.IX Item "TRUNCATE_TABLE"
If this directive is set to 1, a \s-1TRUNCATE TABLE\s0 instruction will be added
before loading data. This is usable only during the \s-1INSERT\s0 or \s-1COPY\s0
export types.
.Sp
When activated, the instruction will be added only if there is no global
\&\s-1DELETE\s0 clause nor one specific to the current table (see below).
.IP "\s-1DELETE\s0" 4
.IX Item "DELETE"
Support for including a \s-1DELETE FROM ... WHERE\s0 clause filter before
importing data, to delete some rows instead of truncating tables.
The value is constructed as follows: TABLE_NAME[\s-1DELETE_WHERE_CLAUSE\s0], or
if you have only one where clause for all tables, just put the delete
clause as a single value. Combining both forms is also possible. Here are some
examples:
.Sp
.Vb 3
\&        DELETE  1=1    # Apply to all tables and delete all tuples
\&        DELETE  TABLE_TEST[ID1=\*(Aq001\*(Aq]   # Apply only on table TABLE_TEST
\&        DELETE  TABLE_TEST[ID1=\*(Aq001\*(Aq OR ID1=\*(Aq002\*(Aq] DATE_CREATE > \*(Aq2001\-01\-01\*(Aq TABLE_INFO[NAME=\*(Aqtest\*(Aq]
.Ve
.Sp
The last example applies two different delete where clauses on tables
\&\s-1TABLE_TEST\s0 and \s-1TABLE_INFO,\s0 and a generic delete where clause on
\&\s-1DATE_CREATE\s0 to all other tables. If \s-1TRUNCATE_TABLE\s0 is enabled, it
will be applied to all tables not covered by a \s-1DELETE\s0 definition.
.Sp
These \s-1DELETE\s0 clauses might be useful with regular \*(L"updates\*(R".
.IP "\s-1STOP_ON_ERROR\s0" 4
.IX Item "STOP_ON_ERROR"
Set this parameter to 0 to not include the call to \eset \s-1ON_ERROR_STOP ON\s0
in the \s-1SQL\s0 scripts generated by Ora2Pg. By default this command is always
present, so that a script will immediately abort when an error is encountered.
.IP "\s-1COPY_FREEZE\s0" 4
.IX Item "COPY_FREEZE"
Enable this directive to use \s-1COPY FREEZE\s0 instead of a simple \s-1COPY\s0 to
export data with rows already frozen. This is intended as a performance
option for initial data loading. Rows will be frozen only if the table
being loaded has been created or truncated in the current sub-transaction.
This will only work with export to file and when \-J or \s-1ORACLE_COPIES\s0 is
not set or default to 1. It can be used with direct import into PostgreSQL
under the same condition but \-j or \s-1JOBS\s0 must also be unset or default to 1.
.IP "\s-1CREATE_OR_REPLACE\s0" 4
.IX Item "CREATE_OR_REPLACE"
By default Ora2Pg uses \s-1CREATE OR REPLACE\s0 in function and view \s-1DDL.\s0
If you do not want to override existing functions or views, disable this
configuration directive and the \s-1DDL\s0 will not include \s-1OR REPLACE.\s0
.IP "\s-1DROP_IF_EXISTS\s0" 4
.IX Item "DROP_IF_EXISTS"
To add a \s-1DROP\s0 <\s-1OBJECT\s0> \s-1IF EXISTS\s0 before creating the object, enable
this directive. Can be useful in an iterative work. Default is disabled.
.IP "\s-1EXPORT_GTT\s0" 4
.IX Item "EXPORT_GTT"
PostgreSQL does not support Global Temporary Tables natively, but you can use
the pgtt extension to emulate this behavior. Enable this directive to export
global temporary tables.
.IP "\s-1NO_HEADER\s0" 4
.IX Item "NO_HEADER"
Enabling this directive will prevent Ora2Pg from printing its header in the
output files. Only the translated code will be written.
.IP "\s-1PSQL_RELATIVE_PATH\s0" 4
.IX Item "PSQL_RELATIVE_PATH"
By default Ora2Pg uses the psql \ei command to execute generated \s-1SQL\s0
files. If you want to use a path relative to the executed script file, enable
this option to use \eir instead. See the psql help for more information.
.IP "\s-1DATA_VALIDATION_ROWS\s0" 4
.IX Item "DATA_VALIDATION_ROWS"
Number of rows that must be retrieved on both sides for data validation.
The default is to compare the first 10000 rows. A value of 0 means compare
all rows.
.IP "\s-1DATA_VALIDATION_ORDERING\s0" 4
.IX Item "DATA_VALIDATION_ORDERING"
The order of rows on both sides differs once the data have been modified. In
this case data must be ordered using a primary key or a unique index, which
means that a table without such an object cannot be compared. If the
validation is done just after the data migration, without any data
modification, the validation can be done on all tables without any ordering.
.IP "\s-1DATA_VALIDATION_ERROR\s0" 4
.IX Item "DATA_VALIDATION_ERROR"
Stop validating data from a table after a certain number of row mismatches.
The default is to stop after 10 row validation errors.
.IP "\s-1TRANSFORM_VALUE\s0" 4
.IX Item "TRANSFORM_VALUE"
Use this directive to specify which transformation should be applied to a
column when exporting data. The value must be a semicolon-separated list of:
.Sp
.Vb 1
\&   TABLE[COLUMN_NAME, <replace code in SELECT target list>]
.Ve
.Sp
For example, to replace the string 'Oracle' with 'PostgreSQL' in a varchar2 column, use the following:
.Sp
.Vb 1
\&   TRANSFORM_VALUE   ERROR_LOG_SAMPLE[DBMS_TYPE:regexp_replace("DBMS_TYPE",\*(AqOracle\*(Aq,\*(AqPostgreSQL\*(Aq)]
.Ve
.Sp
or to replace all Oracle \fBchr\fR\|(0) in a string by a space character:
.Sp
.Vb 1
\&    TRANSFORM_VALUE   CLOB_TABLE[CHARDATA:translate("CHARDATA", chr(0), \*(Aq \*(Aq)]
.Ve
.Sp
The expression will be applied in the \s-1SQL\s0 statement used to extract data
from the source database.
.PP
When using Ora2Pg export type \s-1INSERT\s0 or \s-1COPY\s0 to dump data to files with
\&\s-1FILE_PER_TABLE\s0 enabled, you will be warned that Ora2Pg will not export
data again if the file already exists. This is to prevent downloading a table
with a huge amount of data twice. To force the download of data from these tables
you have to remove the existing output file first.
.PP
If you want to import data on the fly into the PostgreSQL database you have three
configuration directives to set the PostgreSQL database connection. This is only
possible with the \s-1COPY\s0 or \s-1INSERT\s0 export types, as for the database schema
there is no real benefit in doing so.
.IP "\s-1PG_DSN\s0" 4
.IX Item "PG_DSN"
Use this directive to set the PostgreSQL data source name used by the DBD::Pg
Perl module, as follows:
.Sp
.Vb 1
\&        dbi:Pg:dbname=pgdb;host=localhost;port=5432
.Ve
.Sp
will connect to database 'pgdb' on localhost at tcp port 5432.
.Sp
Note that this directive is only used for data export; other exports need to
be imported manually through the use of psql or any other PostgreSQL client.
.Sp
To use an \s-1SSL\s0 encrypted connection you must add sslmode=require to the connection
string, as follows:
.Sp
.Vb 1
\&        dbi:Pg:dbname=pgdb;host=localhost;port=5432;sslmode=require
.Ve
.IP "\s-1PG_USER\s0 and \s-1PG_PWD\s0" 4
.IX Item "PG_USER and PG_PWD"
These two directives are used to set the login user and password.
.Sp
If you do not supply a credential with \s-1PG_PWD\s0 and you have installed the
Term::ReadKey Perl module, Ora2Pg will ask for the password interactively. If
\&\s-1PG_USER\s0 is not set it will be asked interactively too.
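.Sp
For example (the credentials are illustrative):
.Sp
.Vb 2
\&        PG_USER pguser
\&        PG_PWD  pgpass
.Ve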
.IP "\s-1SYNCHRONOUS_COMMIT\s0" 4
.IX Item "SYNCHRONOUS_COMMIT"
Specifies whether transaction commit will wait for \s-1WAL\s0 records to be written
to disk before the command returns a \*(L"success\*(R" indication to the client. This
is the equivalent of setting the synchronous_commit directive in the postgresql.conf
file. This is only used when you load data directly into PostgreSQL; the default is
off, to disable synchronous commit and gain speed when writing data. Some modified
versions of PostgreSQL, like Greenplum, do not have this setting; in that case
set this directive to 1 and Ora2Pg will not try to change the setting.
.IP "\s-1PG_INITIAL_COMMAND\s0" 4
.IX Item "PG_INITIAL_COMMAND"
This directive can be used to send an initial command to PostgreSQL, just after
the connection. For example to set some session parameters. This directive can
be used multiple times.
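.Sp
For example, to set some session parameters (the values are illustrative):
.Sp
.Vb 2
\&        PG_INITIAL_COMMAND      SET work_mem TO \*(Aq256MB\*(Aq
\&        PG_INITIAL_COMMAND      SET maintenance_work_mem TO \*(Aq1GB\*(Aq
.Ve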
.IP "\s-1INSERT_ON_CONFLICT\s0" 4
.IX Item "INSERT_ON_CONFLICT"
When enabled, this instructs Ora2Pg to add an \s-1ON CONFLICT DO NOTHING\s0 clause to all
\&\s-1INSERT\s0 statements generated for this type of data export.
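.Sp
A generated statement would then look like the following (table and values are illustrative):
.Sp
.Vb 1
\&        INSERT INTO employees (id, name) VALUES (1, \*(AqSMITH\*(Aq) ON CONFLICT DO NOTHING;
.Ve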
.SS "Column type control"
.IX Subsection "Column type control"
.IP "\s-1PG_NUMERIC_TYPE\s0" 4
.IX Item "PG_NUMERIC_TYPE"
If set to 1, replace portable numeric types with PostgreSQL internal types.
The Oracle data type \s-1NUMBER\s0(p,s) is approximately converted to the real and
float PostgreSQL data types. If you have monetary fields or don't want
rounding issues with the extra decimals you should preserve the same
numeric(p,s) PostgreSQL data type. Do that only if you need exactness,
because using numeric(p,s) is slower than using real or double.
.IP "\s-1PG_INTEGER_TYPE\s0" 4
.IX Item "PG_INTEGER_TYPE"
If set to 1, replace portable numeric types with PostgreSQL internal types.
The Oracle data types \s-1NUMBER\s0(p) and \s-1NUMBER\s0 are converted to the smallint,
integer or bigint PostgreSQL data type depending on the precision.
\&\s-1NUMBER\s0 without precision is converted according to \s-1DEFAULT_NUMERIC\s0 (see below).
.IP "\s-1DEFAULT_NUMERIC\s0" 4
.IX Item "DEFAULT_NUMERIC"
\&\s-1NUMBER\s0 without precision is converted by default to bigint, but only if
\&\s-1PG_INTEGER_TYPE\s0 is true. You can override this value with any \s-1PG\s0 type,
like integer or float.
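.Sp
For example, to use integer instead:
.Sp
.Vb 1
\&        DEFAULT_NUMERIC integer
.Ve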
.IP "\s-1DATA_TYPE\s0" 4
.IX Item "DATA_TYPE"
If you're experiencing any problem in data type schema conversion, with this
directive you can take full control of the correspondence between Oracle and
PostgreSQL types and redefine the data type translation used in Ora2Pg. The syntax
is a comma-separated list of \*(L"Oracle datatype:Postgresql datatype\*(R". Here is
the default list used:
.Sp
.Vb 1
\&        DATA_TYPE       VARCHAR2:varchar,NVARCHAR2:varchar,NVARCHAR:varchar,NCHAR:char,DATE:timestamp(0),LONG:text,LONG RAW:bytea,CLOB:text,NCLOB:text,BLOB:bytea,BFILE:bytea,RAW(16):uuid,RAW(32):uuid,RAW:bytea,UROWID:oid,ROWID:oid,FLOAT:double precision,DEC:decimal,DECIMAL:decimal,DOUBLE PRECISION:double precision,INT:integer,INTEGER:integer,REAL:real,SMALLINT:smallint,BINARY_FLOAT:double precision,BINARY_DOUBLE:double precision,TIMESTAMP:timestamp,XMLTYPE:xml,BINARY_INTEGER:integer,PLS_INTEGER:integer,TIMESTAMP WITH TIME ZONE:timestamp with time zone,TIMESTAMP WITH LOCAL TIME ZONE:timestamp with time zone
.Ve
.Sp
The directive and the list definition must be a single line.
.Sp
Note that when a \s-1RAW\s0(16) or \s-1RAW\s0(32) column is found, or when a \s-1RAW\s0 column
has \*(L"\s-1\fBSYS_GUID\s0()\fR\*(R" as its default value, Ora2Pg will automatically translate the
type of the column into uuid, which is probably the right translation in most
cases. The data will then be automatically migrated to the PostgreSQL uuid
data type provided by the \*(L"uuid-ossp\*(R" extension.
.Sp
If you want to replace a type with a precision and scale you need to escape
the comma with a backslash. For example, if you want to replace all \s-1NUMBER\s0(*,0)
with bigint instead of numeric(38), add the following:
.Sp
.Vb 1
\&       DATA_TYPE       NUMBER(*\e,0):bigint
.Ve
.Sp
You don't have to copy all the default type conversions, just the ones you want
to rewrite.
.Sp
There's a special case with \s-1BFILE:\s0 when converted to type \s-1TEXT,\s0 the columns
will just contain the full path to the external file. If you set the
destination type to \s-1BYTEA,\s0 the default, Ora2Pg will export the content of the
\&\s-1BFILE\s0 as bytea. The third case is when you set the destination type to \s-1EFILE;\s0
in this case, Ora2Pg will export it as an \s-1EFILE\s0 record: (\s-1DIRECTORY, FILENAME\s0).
Use the \s-1DIRECTORY\s0 export type to export the existing directories as well as
privileges on those directories.
.Sp
There's no \s-1SQL\s0 function available to retrieve the path to the \s-1BFILE,\s0 so Ora2Pg
has to create one using the \s-1DBMS_LOB\s0 package.
.Sp
.Vb 10
\&        CREATE OR REPLACE FUNCTION ora2pg_get_bfilename( p_bfile IN BFILE )
\&        RETURN VARCHAR2
\&        AS
\&            l_dir   VARCHAR2(4000);
\&            l_fname VARCHAR2(4000);
\&            l_path  VARCHAR2(4000);
\&        BEGIN
\&            dbms_lob.FILEGETNAME( p_bfile, l_dir, l_fname );
\&            SELECT directory_path INTO l_path FROM all_directories
\&                WHERE directory_name = l_dir;
\&            l_dir := rtrim(l_path,\*(Aq/\*(Aq);
\&            RETURN l_dir || \*(Aq/\*(Aq || l_fname;
\&        END;
.Ve
.Sp
This function is only created if Ora2Pg finds a table with a \s-1BFILE\s0 column and
the destination type is \s-1TEXT.\s0 The function is dropped at the end of the
export. This concerns both the \s-1COPY\s0 and \s-1INSERT\s0 export types.
.Sp
There's no \s-1SQL\s0 function available to retrieve a \s-1BFILE\s0 as an \s-1EFILE\s0 record, so
Ora2Pg has to create one using the \s-1DBMS_LOB\s0 package.
.Sp
.Vb 9
\&        CREATE OR REPLACE FUNCTION ora2pg_get_efile( p_bfile IN BFILE )
\&        RETURN VARCHAR2
\&        AS
\&            l_dir   VARCHAR2(4000);
\&            l_fname VARCHAR2(4000);
\&        BEGIN
\&            dbms_lob.FILEGETNAME( p_bfile, l_dir, l_fname );
\&            RETURN \*(Aq(\*(Aq || l_dir || \*(Aq,\*(Aq || l_fname || \*(Aq)\*(Aq;
\&        END;
.Ve
.Sp
This function is only created if Ora2Pg finds a table with a \s-1BFILE\s0 column and
the destination type is \s-1EFILE.\s0 The function is dropped at the end of the
export. This concerns both the \s-1COPY\s0 and \s-1INSERT\s0 export types.
.Sp
To set the destination type, use the \s-1DATA_TYPE\s0 configuration directive:
.Sp
.Vb 1
\&        DATA_TYPE       BFILE:EFILE
.Ve
.Sp
for example.
.Sp
The \s-1EFILE\s0 type is a user-defined type created by the PostgreSQL extension
external_file that can be found here: https://github.com/darold/external_file.
It is a port of the \s-1BFILE\s0 Oracle type to PostgreSQL.
.Sp
There's no \s-1SQL\s0 function available to retrieve the content of a \s-1BFILE,\s0 so Ora2Pg
has to create one using the \s-1DBMS_LOB\s0 package.
.Sp
.Vb 10
\&        CREATE OR REPLACE FUNCTION ora2pg_get_bfile( p_bfile IN BFILE ) RETURN
\&        BLOB
\&          AS
\&                filecontent BLOB := NULL;
\&                src_file BFILE := NULL;
\&                l_step PLS_INTEGER := 12000;
\&                l_dir   VARCHAR2(4000);
\&                l_fname VARCHAR2(4000);
\&                offset NUMBER := 1;
\&          BEGIN
\&            IF p_bfile IS NULL THEN
\&              RETURN NULL;
\&            END IF;
\&
\&            DBMS_LOB.FILEGETNAME( p_bfile, l_dir, l_fname );
\&            src_file := BFILENAME( l_dir, l_fname );
\&            IF src_file IS NULL THEN
\&                RETURN NULL;
\&            END IF;
\&
\&            DBMS_LOB.FILEOPEN(src_file, DBMS_LOB.FILE_READONLY);
\&            DBMS_LOB.CREATETEMPORARY(filecontent, true);
\&            DBMS_LOB.LOADBLOBFROMFILE (filecontent, src_file, DBMS_LOB.LOBMAXSIZE, offset, offset);
\&            DBMS_LOB.FILECLOSE(src_file);
\&            RETURN filecontent;
\&        END;
.Ve
.Sp
This function is only created if Ora2Pg finds a table with a \s-1BFILE\s0 column and
the destination type is bytea (the default). The function is dropped at
the end of the export. This concerns both the \s-1COPY\s0 and \s-1INSERT\s0 export types.
.Sp
As for \s-1ROWID\s0 and \s-1UROWID,\s0 they are converted into \s-1OID\s0 by default,
but this will throw an error at data import. There is no equivalent data type,
so you might want to use the \s-1DATA_TYPE\s0 directive to change the corresponding
type in PostgreSQL. You should consider replacing this data type with a bigserial
(autoincremented sequence), text or uuid data type.
.IP "\s-1MODIFY_TYPE\s0" 4
.IX Item "MODIFY_TYPE"
Sometimes you need to force the destination type; for example, a column
exported as timestamp by Ora2Pg can be forced to type date. The value is
a comma-separated list of \s-1TABLE:COLUMN:TYPE\s0 entries. If you need to use a
comma or space inside a type definition you will have to escape them with a backslash.
.Sp
.Vb 1
\&        MODIFY_TYPE     TABLE1:COL3:varchar,TABLE1:COL4:decimal(9\e,6)
.Ve
.Sp
The type of table1.col3 will be replaced by varchar and that of table1.col4 by
a decimal with precision and scale.
.Sp
If the column's type is a user-defined type, Ora2Pg will autodetect the
composite type and will export its data using \s-1\fBROW\s0()\fR. Some Oracle
user-defined types are just arrays of a native type; in this case you may want
to transform such a column into a simple array of a PostgreSQL native type.
To do so, just redefine the destination type as wanted and Ora2Pg will
also transform the data into an array. For example, with the following
definition in Oracle:
.Sp
.Vb 7
\&        CREATE OR REPLACE TYPE mem_type IS VARRAY(10) of VARCHAR2(15);
\&        CREATE TABLE club (Name VARCHAR2(10),
\&                Address VARCHAR2(20),
\&                City VARCHAR2(20),
\&                Phone VARCHAR2(8),
\&                Members mem_type
\&        );
.Ve
.Sp
the custom type \*(L"mem_type\*(R" is just a string array and can be translated into
the following in PostgreSQL:
.Sp
.Vb 7
\&        CREATE TABLE club (
\&                name varchar(10),
\&                address varchar(20),
\&                city varchar(20),
\&                phone varchar(8),
\&                members text[]
\&        ) ;
.Ve
.Sp
To do so, just use the directive as follows:
.Sp
.Vb 1
\&        MODIFY_TYPE     CLUB:MEMBERS:text[]
.Ve
.Sp
Ora2Pg will take care of transforming all data in this column into the correct
format. Only arrays of character and numeric types are supported.
.IP "\s-1TO_NUMBER_CONVERSION\s0" 4
.IX Item "TO_NUMBER_CONVERSION"
By default an Oracle call to the \s-1TO_NUMBER\s0 function will be translated as a cast
to numeric. For example, \s-1TO_NUMBER\s0('10.1234') is converted into the PostgreSQL
call to_number('10.1234')::numeric. If you want, you can cast the call to integer
or bigint by changing the value of the configuration directive. If you need
better control of the format, just set the format as value, for example:
.Sp
.Vb 1
\&        TO_NUMBER_CONVERSION    99999999999999999999.9999999999
.Ve
.Sp
will convert the code above into:
.Sp
.Vb 1
\&        TO_NUMBER(\*(Aq10.1234\*(Aq, \*(Aq99999999999999999999.9999999999\*(Aq)
.Ve
.Sp
Any value of the directive that is not numeric, integer or bigint will
be taken as a mask format. If set to none, no conversion will be done.
.IP "\s-1VARCHAR_TO_TEXT\s0" 4
.IX Item "VARCHAR_TO_TEXT"
By default varchar2 without a size constraint is translated into text. If you
want to keep the varchar name, disable this directive.
.IP "\s-1FORCE_IDENTITY_BIGINT\s0" 4
.IX Item "FORCE_IDENTITY_BIGINT"
Usually an identity column must be a bigint to correspond to an auto-increment
sequence, so Ora2Pg always forces it to bigint. If, for any reason, you
want Ora2Pg to respect the \s-1DATA_TYPE\s0 you have set for identity columns,
disable this directive.
.IP "\s-1TO_CHAR_NOTIMEZONE\s0" 4
.IX Item "TO_CHAR_NOTIMEZONE"
If you want Ora2Pg to remove any timezone information from the format part
of the \s-1\fBTO_CHAR\s0()\fR function, enable this directive. Disabled by default.
.SS "Taking export under control"
.IX Subsection "Taking export under control"
The following other configuration directives interact directly with the export process and give you fine granularity in database export control.
.IP "\s-1SKIP\s0" 4
.IX Item "SKIP"
For a \s-1TABLE\s0 export you may not want to export all schema constraints; the \s-1SKIP\s0
configuration directive allows you to specify a space-separated list of
constraints that should not be exported. Possible values are:
.Sp
.Vb 5
\&        \- fkeys: turn off foreign key constraints
\&        \- pkeys: turn off primary keys
\&        \- ukeys: turn off unique column constraints
\&        \- indexes: turn off all other index types
\&        \- checks: turn off check constraints
.Ve
.Sp
For example:
.Sp
.Vb 1
\&        SKIP    indexes,checks
.Ve
.Sp
will remove indexes and check constraints from the export.
.IP "\s-1PKEY_IN_CREATE\s0" 4
.IX Item "PKEY_IN_CREATE"
Enable this directive if you want to add the primary key definition inside the
create table statement. If disabled (the default) the primary key definition
will be added with an alter table statement. Enable it if you are exporting
to a Greenplum PostgreSQL database.
.IP "\s-1KEEP_PKEY_NAMES\s0" 4
.IX Item "KEEP_PKEY_NAMES"
By default the names of the primary and unique keys in the source Oracle database
are ignored and key names are autogenerated in the target PostgreSQL database
with the PostgreSQL internal default naming rules. If you want to preserve
Oracle primary and unique key names set this option to 1.
.IP "\s-1FKEY_ADD_UPDATE\s0" 4
.IX Item "FKEY_ADD_UPDATE"
This directive allows you to add an \s-1ON UPDATE CASCADE\s0 option to a foreign
key, either when an \s-1ON DELETE CASCADE\s0 is defined or always. Oracle does not
support this feature; you have to use a trigger to emulate \s-1ON UPDATE CASCADE.\s0
As PostgreSQL has this feature, you can choose how to add the foreign
key option. There are three values for this directive: never, the default,
means that foreign keys will be declared exactly as in Oracle.
The second value, delete, means that the \s-1ON UPDATE CASCADE\s0 option
will be added only if \s-1ON DELETE CASCADE\s0 is already defined on the
foreign key. The last value, always, will force all foreign keys to be
defined using the update option.
.IP "\s-1FKEY_DEFERRABLE\s0" 4
.IX Item "FKEY_DEFERRABLE"
When exporting tables, Ora2Pg normally exports constraints as they are: if they
are non-deferrable they are exported as non-deferrable. However, non-deferrable
constraints will probably cause problems when attempting to import data into
PostgreSQL. Setting the \s-1FKEY_DEFERRABLE\s0 option to 1 will cause all foreign key
constraints to be exported as deferrable.
.IP "\s-1DEFER_FKEY\s0" 4
.IX Item "DEFER_FKEY"
When the \s-1DEFER_FKEY\s0 option is set to 1 during data export, Ora2Pg will add
a command to defer all foreign key constraints and
the import will be done in a single transaction. This will work only if
foreign keys have been exported as deferrable and you are not using direct
import to PostgreSQL (\s-1PG_DSN\s0 is not defined). Constraints will then be
checked at the end of the transaction.
.Sp
This directive can also be enabled if you want to force all foreign keys
to be created as deferrable and initially deferred during schema export
(\s-1TABLE\s0 export type).
.IP "\s-1DROP_FKEY\s0" 4
.IX Item "DROP_FKEY"
If deferring foreign keys is not possible due to the amount of data in a
single transaction, or you have not exported foreign keys as deferrable, or you
are using direct import to PostgreSQL, you can use the \s-1DROP_FKEY\s0 directive.
.Sp
It will drop all foreign keys before all data import and recreate them at
the end of the import.
.IP "\s-1DROP_INDEXES\s0" 4
.IX Item "DROP_INDEXES"
This directive allows you to gain a lot of speed during data import
by removing all indexes that are not automatic indexes (indexes of primary
keys) and recreating them at the end of the data import. Of course it is far
better not to import indexes and constraints before having imported all data.
.IP "\s-1DISABLE_TRIGGERS\s0" 4
.IX Item "DISABLE_TRIGGERS"
This directive is used to disable triggers on all tables in \s-1COPY\s0 or \s-1INSERT\s0
export modes. Available values are \s-1USER\s0 (disable user-defined triggers only)
and \s-1ALL\s0 (includes \s-1RI\s0 system triggers). Default is 0: do not add \s-1SQL\s0 statements
to disable trigger before data import.
.Sp
If you want to disable triggers during data migration, set the value to
\&\s-1USER\s0 if you are connected as a non-superuser and \s-1ALL\s0 if you are connected
as a PostgreSQL superuser. A value of 1 is equal to \s-1USER.\s0
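.Sp
When set to \s-1USER,\s0 Ora2Pg emits statements like the following around the data import (the table name is illustrative):
.Sp
.Vb 3
\&        ALTER TABLE employees DISABLE TRIGGER USER;
\&        \-\- data import ...
\&        ALTER TABLE employees ENABLE TRIGGER USER;
.Ve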
.IP "\s-1DISABLE_SEQUENCE\s0" 4
.IX Item "DISABLE_SEQUENCE"
If set to 1, disables altering of sequences on all tables during \s-1COPY\s0 or \s-1INSERT\s0 export
mode. This is used to prevent the update of sequences during data migration.
Default is 0: sequences are altered.
.IP "\s-1NOESCAPE\s0" 4
.IX Item "NOESCAPE"
By default all data that is not of type date or time is escaped. If you
experience any problem with that you can set it to 1 to disable character
escaping during data export. This directive is only used during a \s-1COPY\s0 export.
See \s-1STANDARD_CONFORMING_STRINGS\s0 for enabling/disabling escape with \s-1INSERT\s0
statements.
.IP "\s-1STANDARD_CONFORMING_STRINGS\s0" 4
.IX Item "STANDARD_CONFORMING_STRINGS"
This controls whether ordinary string literals ('...') treat backslashes
literally, as specified in the \s-1SQL\s0 standard. Before Ora2Pg v8.5 the default
was to escape all strings first; now this is on by default, causing
Ora2Pg to use the escape string syntax (E'...') if this parameter is not
set to 0. This is the exact behavior of the same option in PostgreSQL.
This directive is only used during data export to build \s-1INSERT\s0 statements.
See \s-1NOESCAPE\s0 for enabling/disabling escape in \s-1COPY\s0 statements.
.IP "\s-1TRIM_TYPE\s0" 4
.IX Item "TRIM_TYPE"
If you want to convert \s-1CHAR\s0(n) from Oracle into varchar(n) or text on PostgreSQL
using the directive \s-1DATA_TYPE,\s0 you might want to do some trimming on the data. By
default Ora2Pg will auto-detect this conversion and remove any whitespace in both
the leading and trailing positions. If you just want to remove leading characters,
set the value to \s-1LEADING.\s0 If you just want to remove trailing characters, set
the value to \s-1TRAILING.\s0 The default value is \s-1BOTH.\s0
.IP "\s-1TRIM_CHAR\s0" 4
.IX Item "TRIM_CHAR"
The default trimming character is a space; use this directive if you need to
change the character that will be removed. For example, set it to \- if you
have leading \- in the char(n) field. To use space as the trimming character,
comment out this directive, as this is the default value.
.IP "\s-1PRESERVE_CASE\s0" 4
.IX Item "PRESERVE_CASE"
If you want to preserve the case of Oracle object names set this directive to 1.
By default Ora2Pg will convert all Oracle object names to lower case. Enabling
this is not recommended unless you are prepared to double-quote all object
names in all your \s-1SQL\s0 scripts.
.IP "\s-1ORA_RESERVED_WORDS\s0" 4
.IX Item "ORA_RESERVED_WORDS"
Allow escaping of column names using Oracle reserved words. The value is a
comma-separated list of reserved words. Default: audit,comment,references.
.IP "\s-1USE_RESERVED_WORDS\s0" 4
.IX Item "USE_RESERVED_WORDS"
Enable this directive if you have table or column names that are reserved
words for PostgreSQL. Ora2Pg will double-quote the names of these objects.
.IP "\s-1GEN_USER_PWD\s0" 4
.IX Item "GEN_USER_PWD"
Set this directive to 1 to replace the default password with a random password
for all extracted users during a \s-1GRANT\s0 export.
.IP "\s-1PG_SUPPORTS_MVIEW\s0" 4
.IX Item "PG_SUPPORTS_MVIEW"
Since PostgreSQL 9.3, materialized views are supported with the \s-1SQL\s0 syntax
\&'\s-1CREATE MATERIALIZED VIEW\s0'. To force Ora2Pg to use the native PostgreSQL
support you must enable this configuration directive \- enabled by default. If you
want to use the old style with a table and a set of functions, you should disable it.
.IP "\s-1PG_SUPPORTS_IFEXISTS\s0" 4
.IX Item "PG_SUPPORTS_IFEXISTS"
PostgreSQL versions below 9.x do not support \s-1IF EXISTS\s0 in \s-1DDL\s0 statements.
Disabling the directive with value 0 will prevent Ora2Pg from adding those
keywords in all generated statements. Default value is 1, enabled.
.IP "\s-1PG_VERSION\s0" 4
.IX Item "PG_VERSION"
Set the PostgreSQL major version number of the target database, e.g. 9.6 or 13.
Default is the current major version at the time of a new release. This replaces
the old and deprecated PG_SUPPORTS_* configuration directives described below.
.IP "\s-1PG_SUPPORTS_ROLE\s0 (Deprecated)" 4
.IX Item "PG_SUPPORTS_ROLE (Deprecated)"
This option is deprecated since Ora2Pg release v7.3.
.Sp
By default Oracle roles are translated into PostgreSQL groups. If you have
PostgreSQL 8.1 or later, consider the use of \s-1ROLES\s0 and set this directive to 1
to export roles.
.IP "\s-1PG_SUPPORTS_INOUT\s0 (Deprecated)" 4
.IX Item "PG_SUPPORTS_INOUT (Deprecated)"
This option is deprecated since Ora2Pg release v7.3.
.Sp
If set to 0, the \s-1IN, OUT\s0 and \s-1INOUT\s0 parameters will not be used in the generated
PostgreSQL function declarations (disable it for PostgreSQL database versions
lower than 8.1). This is now enabled by default.
.IP "\s-1PG_SUPPORTS_DEFAULT\s0" 4
.IX Item "PG_SUPPORTS_DEFAULT"
This directive enables or disables the use of default parameter values in function
export. Before PostgreSQL 8.4 such default values were not supported; this feature
is now enabled by default.
.IP "\s-1PG_SUPPORTS_WHEN\s0 (Deprecated)" 4
.IX Item "PG_SUPPORTS_WHEN (Deprecated)"
Add support for the \s-1WHEN\s0 clause on triggers, as PostgreSQL v9.0 and later support
it. This directive is enabled by default; set it to 0 to disable this feature.
.IP "\s-1PG_SUPPORTS_INSTEADOF\s0 (Deprecated)" 4
.IX Item "PG_SUPPORTS_INSTEADOF (Deprecated)"
Add support for \s-1INSTEAD OF\s0 usage on triggers (used with \s-1PG\s0 >= 9.1); if this
directive is disabled the \s-1INSTEAD OF\s0 triggers will be rewritten as Pg rules.
.IP "\s-1PG_SUPPORTS_CHECKOPTION\s0" 4
.IX Item "PG_SUPPORTS_CHECKOPTION"
When enabled, export views with \s-1CHECK OPTION.\s0 Disable it if your PostgreSQL
version is prior to 9.4. Default: 1, enabled.
.IP "\s-1PG_SUPPORTS_IFEXISTS\s0" 4
.IX Item "PG_SUPPORTS_IFEXISTS"
If disabled, do not export objects with \s-1IF EXISTS\s0 statements.
Enabled by default.
.IP "\s-1PG_SUPPORTS_PARTITION\s0" 4
.IX Item "PG_SUPPORTS_PARTITION"
PostgreSQL versions prior to 10.0 do not have native partitioning.
Enable this directive if you want to use declarative partitioning.
Enabled by default.
.IP "\s-1PG_SUPPORTS_SUBSTR\s0" 4
.IX Item "PG_SUPPORTS_SUBSTR"
Some PostgreSQL derivatives like Redshift don't support \fBsubstr()\fR
and it needs to be replaced by a call to \fBsubstring()\fR. In this case,
disable this directive.
.IP "\s-1PG_SUPPORTS_NAMED_OPERATOR\s0" 4
.IX Item "PG_SUPPORTS_NAMED_OPERATOR"
Disable this directive if you are using \s-1PG\s0 < 9.5; the \s-1PL/SQL\s0 named
parameter operator => will be replaced by the PostgreSQL proprietary operator :=.
Enabled by default.
.IP "\s-1PG_SUPPORTS_IDENTITY\s0" 4
.IX Item "PG_SUPPORTS_IDENTITY"
Enable this directive if you have PostgreSQL >= 10 to use \s-1IDENTITY\s0 columns
instead of the serial or bigserial data types. If \s-1PG_SUPPORTS_IDENTITY\s0 is disabled
and there are \s-1IDENTITY\s0 columns in the Oracle table, they are exported as serial
or bigserial columns. When it is enabled they are exported as \s-1IDENTITY\s0 columns
like:
.Sp
.Vb 4
\&      CREATE TABLE identity_test_tab (
\&              id bigint GENERATED ALWAYS AS IDENTITY,
\&              description varchar(30)
\&      ) ;
.Ve
.Sp
If non-default sequence options are set in Oracle, they will be appended
after the \s-1IDENTITY\s0 keyword.
Additionally, in both cases Ora2Pg will create a file AUTOINCREMENT_output.sql
with an embedded function to update the associated sequences with the restart
value set to \*(L"\s-1SELECT\s0 max(colname)+1 \s-1FROM\s0 tablename\*(R". Of course this file must
be imported after the data import, otherwise sequences will be kept at their start
value. Enabled by default.
.IP "\s-1PG_SUPPORTS_PROCEDURE\s0" 4
.IX Item "PG_SUPPORTS_PROCEDURE"
PostgreSQL v11 adds support for \s-1PROCEDURE;\s0 enable this directive if you use such a version.
.IP "\s-1BITMAP_AS_GIN\s0" 4
.IX Item "BITMAP_AS_GIN"
Use the btree_gin extension to create bitmap-like indexes with \s-1PG\s0 >= 9.4.
You will need to create the extension yourself:
.Sp
.Vb 1
\&        CREATE EXTENSION btree_gin;
.Ve
.Sp
Default is to create a \s-1GIN\s0 index; when disabled, a btree index will be created.
.IP "\s-1PG_BACKGROUND\s0" 4
.IX Item "PG_BACKGROUND"
Use the pg_background extension to create an autonomous transaction instead
of using a dblink wrapper. With \s-1PG\s0 >= 9.5 only. Default is to use dblink.
See https://github.com/vibhorkum/pg_background for more about this extension.
.IP "\s-1DBLINK_CONN\s0" 4
.IX Item "DBLINK_CONN"
By default, if you have an autonomous transaction translated using the dblink
extension instead of pg_background, the connection is defined using the
values set with \s-1PG_DSN, PG_USER\s0 and \s-1PG_PWD.\s0 If you want to fully override
the connection string, use this directive as follows to set the connection
in the autonomous transaction wrapper function. For example:
.Sp
.Vb 1
\&        DBLINK_CONN    port=5432 dbname=pgdb host=localhost user=pguser password=pgpass
.Ve
.IP "\s-1LONGREADLEN\s0" 4
.IX Item "LONGREADLEN"
Use this directive to set the database handle's 'LongReadLen' attribute to a
value that will be larger than the expected size of the LOBs. The default
is 1MB which may not be enough to extract BLOBs or CLOBs. If the size of the
\&\s-1LOB\s0 exceeds 'LongReadLen', DBD::Oracle will return an '\s-1ORA\-24345: A\s0 Truncation'
error. Default: 1023*1024 bytes.
.Sp
Take a look at this page to learn more: http://search.cpan.org/~pythian/DBD\-Oracle\-1.22/Oracle.pm#Data_Interface_for_Persistent_LOBs
.Sp
Important note: if you increase the value of this directive, be aware that
\&\s-1DATA_LIMIT\s0 will probably need to be reduced. Even if you only have 1MB blobs,
trying to read 10000 of them (the default \s-1DATA_LIMIT\s0) all at once will require
10GB of memory. You may extract data from those tables separately and set
\&\s-1DATA_LIMIT\s0 to 500 or lower, otherwise you may experience out of memory errors.
.IP "\s-1LONGTRUNKOK\s0" 4
.IX Item "LONGTRUNKOK"
If you want to bypass the '\s-1ORA\-24345: A\s0 Truncation' error, set this directive
to 1; it will truncate the extracted data to the LongReadLen value. Disabled
by default so that you will be warned if your LongReadLen value is not high
enough.
.IP "\s-1USE_LOB_LOCATOR\s0" 4
.IX Item "USE_LOB_LOCATOR"
Disable this if you want to load the full content of \s-1BLOB\s0 and \s-1CLOB\s0 columns
and not use \s-1LOB\s0 locators. In this case you will have to set \s-1LONGREADLEN\s0 to
the right value. Note that this will not improve the speed of \s-1BLOB\s0 export, as
most of the time is always consumed by the bytea escaping, and in this case the
export is done line by line and not by chunks of \s-1DATA_LIMIT\s0 rows. For more
information on how it works, see http://search.cpan.org/~pythian/DBD\-Oracle\-1.74/lib/DBD/Oracle.pm#Data_Interface_for_LOB_Locators
.Sp
Default is enabled; \s-1LOB\s0 locators are used.
.IP "\s-1LOB_CHUNK_SIZE\s0" 4
.IX Item "LOB_CHUNK_SIZE"
Oracle recommends reading from and writing to a \s-1LOB\s0 in batches using a
multiple of the \s-1LOB\s0 chunk size. This chunk size defaults to 8k (8192).
Recent tests have shown that the best performance can be reached with a higher
value like 512KB or 4MB.
.Sp
A quick benchmark with 30120 rows of different \s-1BLOB\s0 sizes (200x5Mb,
19800x212k, 10000x942K, 100x17Mb, 20x156Mb), with DATA_LIMIT=100,
LONGREADLEN=170Mb and a total table size of 20GB gives:
.Sp
.Vb 4
\&       no lob locator  : 22m46,218s (1365 sec., avg: 22 recs/sec)
\&       chunk size 8k   : 15m50,886s (951 sec., avg: 31 recs/sec)
\&       chunk size 512k : 1m28,161s (88 sec., avg: 342 recs/sec)
\&       chunk size 4Mb  : 1m23,717s (83 sec., avg: 362 recs/sec)
.Ve
.Sp
In conclusion, it can be more than 10 times faster with \s-1LOB_CHUNK_SIZE\s0 set
to 4MB. Depending on the size of most of your BLOBs, you may want to adjust the
value here. For example, if you have a majority of small lobs below 8K, using 8192
is better so as not to waste space. The default value for \s-1LOB_CHUNK_SIZE\s0 is 512000.
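.Sp
For example, to use a 4MB chunk size (the value is assumed to be in bytes, like the 512000 default):
.Sp
.Vb 1
\&        LOB_CHUNK_SIZE  4194304
.Ve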
.IP "\s-1XML_PRETTY\s0" 4
.IX Item "XML_PRETTY"
Force the use of \fBgetStringVal()\fR instead of \fBgetClobVal()\fR for \s-1XML\s0 data export. Default is 1,
enabled for backward compatibility. Set it to 0 to use the extract method as for a \s-1CLOB.\s0
Note that an \s-1XML\s0 value extracted with \fBgetStringVal()\fR must not exceed the \s-1VARCHAR2\s0 size
limit (4000), otherwise it will return an error.
.IP "\s-1ENABLE_MICROSECOND\s0" 4
.IX Item "ENABLE_MICROSECOND"
Set it to 0 if you want to disable export of milliseconds from Oracle timestamp
columns. By default milliseconds are exported with the use of the following format:
.Sp
.Vb 1
\&        \*(AqYYYY\-MM\-DD HH24:MI:SS.FF\*(Aq
.Ve
.Sp
Disabling will force the use of the following Oracle format:
.Sp
.Vb 1
\&        to_char(..., \*(AqYYYY\-MM\-DD HH24:MI:SS\*(Aq)
.Ve
.Sp
By default milliseconds are exported.
.IP "\s-1DISABLE_COMMENT\s0" 4
.IX Item "DISABLE_COMMENT"
Set this to 1 if you don't want to export the comments associated with table and
column definitions. By default comments are exported.
.SS "Control MySQL export behavior"
.IX Subsection "Control MySQL export behavior"
.IP "\s-1MYSQL_PIPES_AS_CONCAT\s0" 4
.IX Item "MYSQL_PIPES_AS_CONCAT"
Enable this if double pipe and double ampersand (|| and &&) should not be
taken as equivalent to \s-1OR\s0 and \s-1AND.\s0 It depends on the variable \f(CW@sql_mode\fR.
Use it only if Ora2Pg fails to auto-detect this behavior.
.IP "\s-1MYSQL_INTERNAL_EXTRACT_FORMAT\s0" 4
.IX Item "MYSQL_INTERNAL_EXTRACT_FORMAT"
Enable this directive if you want the \s-1\fBEXTRACT\s0()\fR replacement to use the internal
format returned as an integer; for example \s-1DD HH24:MM:SS\s0 will be replaced
with the format DDHH24MMSS::bigint. This depends on your application's usage.
.SS "Control \s-1SQL\s0 Server export behavior"
.IX Subsection "Control SQL Server export behavior"
.IP "\s-1DROP_ROWVERSION\s0" 4
.IX Item "DROP_ROWVERSION"
PostgreSQL has no equivalent to the rowversion datatype and feature; if you want
to remove these useless columns, enable this directive. Columns of datatype
\&'rowversion' or 'timestamp' will not be exported.
.IP "\s-1CASE_INSENSITIVE_SEARCH\s0" 4
.IX Item "CASE_INSENSITIVE_SEARCH"
Emulate the case insensitive search behavior of \s-1MSSQL.\s0 If the value
is citext it will use the citext data type instead of char/varchar/text in
tables \s-1DDL\s0 (Ora2Pg will add a \s-1CHECK\s0 constraint for columns with a precision).
Instead of citext you can also set a collation name that will be used in the
column definitions. To disable case insensitive search set it to: none.
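.Sp
For example, to use the citext emulation described above you would set the
following in ora2pg.conf:

```
CASE_INSENSITIVE_SEARCH    citext
```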
.IP "\s-1SELECT_TOP\s0" 4
.IX Item "SELECT_TOP"
Append a \s-1TOP N\s0 clause to the \s-1SELECT\s0 command used to extract the data from
\&\s-1SQL\s0 Server. This is the equivalent of a \s-1WHERE ROWNUM\s0 < 1000 clause in Oracle.
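.Sp
For example, to limit extraction to the first 1000 rows of each table
(mirroring the \s-1WHERE ROWNUM\s0 < 1000 example above):

```
SELECT_TOP    1000
```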
.SS "Special options to handle character encoding"
.IX Subsection "Special options to handle character encoding"
.IP "\s-1NLS_LANG\s0 and \s-1NLS_NCHAR\s0" 4
.IX Item "NLS_LANG and NLS_NCHAR"
By default Ora2Pg will set \s-1NLS_LANG\s0 to \s-1AMERICAN_AMERICA.AL32UTF8\s0 and \s-1NLS_NCHAR\s0
to \s-1AL32UTF8.\s0 It is not recommended to change these settings, but in some cases it
can be useful. Using your own settings with these configuration directives will
change the client encoding on the Oracle side by setting the environment variables
\&\f(CW$ENV\fR{\s-1NLS_LANG\s0} and \f(CW$ENV\fR{\s-1NLS_NCHAR\s0}.
.IP "\s-1BINMODE\s0" 4
.IX Item "BINMODE"
By default Ora2Pg will force Perl to use utf8 I/O encoding. This is done through
a call to the Perl pragma:
.Sp
.Vb 1
\&        use open \*(Aq:utf8\*(Aq;
.Ve
.Sp
You can override this encoding with the \s-1BINMODE\s0 directive; for example you
can set it to :locale to use your locale, or to iso\-8859\-7. These will respectively use
.Sp
.Vb 2
\&        use open \*(Aq:locale\*(Aq;
\&        use open \*(Aq:encoding(iso\-8859\-7)\*(Aq;
.Ve
.Sp
If you have changed \s-1NLS_LANG\s0 to a non\-\s-1UTF8\s0 encoding, you might want to set this
directive. See http://perldoc.perl.org/5.14.2/open.html for more information.
Most of the time, leave this directive commented out.
.IP "\s-1CLIENT_ENCODING\s0" 4
.IX Item "CLIENT_ENCODING"
By default the PostgreSQL client encoding is automatically set to \s-1UTF8\s0 to avoid
encoding issues. If you have changed the value of \s-1NLS_LANG\s0 you might have to
change the encoding of the PostgreSQL client accordingly.
.Sp
You can take a look at the PostgreSQL supported character sets here: http://www.postgresql.org/docs/9.0/static/multibyte.html
.IP "\s-1FORCE_PLSQL_ENCODING\s0" 4
.IX Item "FORCE_PLSQL_ENCODING"
To force utf8 encoding of the exported \s-1PL/SQL\s0 code, enable this directive.
This can be helpful in some rare conditions.
.SS "\s-1PLSQL\s0 to \s-1PLPGSQL\s0 conversion"
.IX Subsection "PLSQL to PLPGSQL conversion"
Automatic code conversion from Oracle \s-1PLSQL\s0 to PostgreSQL \s-1PLPGSQL\s0 is a work in
progress in Ora2Pg and you will surely always have some manual work left. The Perl
code used for automatic conversion is all stored in a specific Perl module named
Ora2Pg/PLSQL.pm; feel free to modify/add your own code and send me patches. The
main work is on function, procedure, package and package body headers and
parameter rewriting.
.IP "\s-1PLSQL_PGSQL\s0" 4
.IX Item "PLSQL_PGSQL"
Enable/disable \s-1PLSQL\s0 to \s-1PLPGSQL\s0 conversion. Enabled by default.
.IP "\s-1NULL_EQUAL_EMPTY\s0" 4
.IX Item "NULL_EQUAL_EMPTY"
Ora2Pg can replace all conditions testing for \s-1NULL\s0 with a call to the
\&\fBcoalesce()\fR function to mimic the Oracle behavior where empty strings are
considered equal to \s-1NULL.\s0
.Sp
.Vb 2
\&        (field1 IS NULL) is replaced by (coalesce(field1::text, \*(Aq\*(Aq) = \*(Aq\*(Aq)
\&        (field2 IS NOT NULL) is replaced by (field2 IS NOT NULL AND field2::text <> \*(Aq\*(Aq)
.Ve
.Sp
You might want this replacement to be sure that your application will keep the
same behavior, but if you have control over your application a better way is to
change it to transform empty strings into \s-1NULL,\s0 because PostgreSQL makes the
difference.
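.Sp
If you prefer to fix the data rather than rewrite conditions, a one-time
cleanup on the PostgreSQL side can turn empty strings into \s-1NULL\s0 (the table
and column names below are purely illustrative):

```sql
-- Hypothetical cleanup so that "field1 IS NULL" matches rows that
-- contained empty strings in Oracle
UPDATE mytable SET field1 = NULL WHERE field1 = '';
```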
.IP "\s-1EMPTY_LOB_NULL\s0" 4
.IX Item "EMPTY_LOB_NULL"
Force \fBempty_clob()\fR and \fBempty_blob()\fR to be exported as \s-1NULL\s0 instead of as an
empty string for the former and '\ex' for the latter. If \s-1NULL\s0 is allowed in your
column this might improve data export speed if you have a lot of empty LOBs.
Default is to preserve the exact data from Oracle.
.IP "\s-1PACKAGE_AS_SCHEMA\s0" 4
.IX Item "PACKAGE_AS_SCHEMA"
If you don't want to export packages as schemas but as simple functions, you
might also want to replace all calls to package_name.function_name. If you
disable the \s-1PACKAGE_AS_SCHEMA\s0 directive then Ora2Pg will replace all calls
to package_name.\fBfunction_name()\fR with \fBpackage_name_function_name()\fR. Default
is to use a schema to emulate the package.
.Sp
The replacement will be done in all kinds of \s-1DDL\s0 or code that is parsed by
the \s-1PLSQL\s0 to \s-1PLPGSQL\s0 converter. \s-1PLSQL_PGSQL\s0 must be enabled or \-p used on the
command line.
.IP "\s-1REWRITE_OUTER_JOIN\s0" 4
.IX Item "REWRITE_OUTER_JOIN"
Enable this directive if the rewrite of the Oracle native \s-1OUTER JOIN\s0 syntax (+)
is broken. This will force Ora2Pg to not rewrite such code; the
default is to try to rewrite simple forms of right outer joins for the
moment.
.IP "\s-1UUID_FUNCTION\s0" 4
.IX Item "UUID_FUNCTION"
By default Ora2Pg will convert calls to the \s-1\fBSYS_GUID\s0()\fR Oracle function
into calls to uuid_generate_v4 from the uuid-ossp extension. You can
redefine it to use the gen_random_uuid function from the pgcrypto
extension by changing the function name. Defaults to uuid_generate_v4.
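.Sp
For example, to switch to the pgcrypto function mentioned above:

```
UUID_FUNCTION    gen_random_uuid
```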
.Sp
Note that when a \s-1RAW\s0(16) or \s-1RAW\s0(32) column is found, or when the \s-1RAW\s0 column
has \*(L"\s-1\fBSYS_GUID\s0()\fR\*(R" as default value, Ora2Pg will automatically translate the
type of the column into uuid, which is likely the right translation in most
cases. In this case data will be automatically migrated as the PostgreSQL uuid
data type provided by the \*(L"uuid-ossp\*(R" extension.
.IP "\s-1FUNCTION_STABLE\s0" 4
.IX Item "FUNCTION_STABLE"
By default Oracle functions are marked as \s-1STABLE\s0 as they cannot modify data
except when used in \s-1PL/SQL\s0 with variable assignment or as a conditional
expression. You can force Ora2Pg to create these functions as \s-1VOLATILE\s0 by
disabling this configuration directive.
.IP "\s-1COMMENT_COMMIT_ROLLBACK\s0" 4
.IX Item "COMMENT_COMMIT_ROLLBACK"
By default calls to \s-1COMMIT/ROLLBACK\s0 are kept untouched by Ora2Pg to force
the user to review the logic of the function. Once it is fixed in the Oracle
source code, or if you want to comment out these calls, enable this directive.
.IP "\s-1COMMENT_SAVEPOINT\s0" 4
.IX Item "COMMENT_SAVEPOINT"
It is common to see \s-1SAVEPOINT\s0 calls inside \s-1PL/SQL\s0 procedures together with
a \s-1ROLLBACK TO\s0 savepoint_name. When \s-1COMMENT_COMMIT_ROLLBACK\s0 is enabled you
may want to also comment out \s-1SAVEPOINT\s0 calls; in this case enable this directive.
.IP "\s-1STRING_CONSTANT_REGEXP\s0" 4
.IX Item "STRING_CONSTANT_REGEXP"
Ora2Pg replaces all string constants during the pl/sql to plpgsql translation;
string constants are all text enclosed between single quotes. If you have
string placeholders used in dynamic queries you can set a list of
regexps to be temporarily replaced so they do not break the parser. For example:
.Sp
.Vb 1
\&        STRING_CONSTANT_REGEXP         <placeholder value=".*">
.Ve
.Sp
The list of regexps must use the semicolon as separator.
.IP "\s-1ALTERNATIVE_QUOTING_REGEXP\s0" 4
.IX Item "ALTERNATIVE_QUOTING_REGEXP"
To support the Alternative Quoting Mechanism ('Q' or 'q') for string literals,
set the regexp with the capture group to use to extract the text part. For
example, with a variable declared as
.Sp
.Vb 1
\&        c_sample VARCHAR2(100 CHAR) := q\*(Aq{This doesn\*(Aqt work.}\*(Aq;
.Ve
.Sp
the regexp to use must be:
.Sp
.Vb 1
\&        ALTERNATIVE_QUOTING_REGEXP     q\*(Aq{(.*)}\*(Aq
.Ve
.Sp
ora2pg will use the $$ delimiter; with the example above the result will be:
.Sp
.Vb 1
\&        c_sample varchar(100) := $$This doesn\*(Aqt work.$$;
.Ve
.Sp
The value of this configuration directive can be a list of regexps
separated by semicolons. The capture part (between parentheses) is
mandatory in each regexp if you want the string constant to be restored.
.IP "\s-1USE_ORAFCE\s0" 4
.IX Item "USE_ORAFCE"
If you want to use functions defined in the Orafce library and prevent
Ora2Pg from translating calls to these functions, enable this directive.
The Orafce library can be found here: https://github.com/orafce/orafce
.Sp
By default Ora2Pg rewrites the \fBadd_month()\fR, \fBadd_year()\fR, \fBdate_trunc()\fR and
\&\fBto_char()\fR functions, but you may prefer to use the Orafce versions of
these functions, which do not need any code transformation.
.IP "\s-1AUTONOMOUS_TRANSACTION\s0" 4
.IX Item "AUTONOMOUS_TRANSACTION"
Enable translation of autonomous transactions into a wrapper function
using dblink or pg_background extension. If you don't want to use this
translation and just want the function to be exported as a normal one
without the pragma call, disable this directive.
.SS "Materialized view"
.IX Subsection "Materialized view"
Materialized views are exported as \*(L"Snapshot Materialized Views\*(R", as
PostgreSQL only supports full refresh.
.PP
If you want to import the materialized views into a PostgreSQL version prior to
9.3 you have to set the configuration directive \s-1PG_SUPPORTS_MVIEW\s0 to 0. In this
case Ora2Pg will export all materialized views as explained in this document:
.PP
.Vb 1
\&        http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views.
.Ve
.PP
When exporting materialized views Ora2Pg will first add the \s-1SQL\s0 code to create the \*(L"materialized_views\*(R" table:
.PP
.Vb 6
\&        CREATE TABLE materialized_views (
\&                mview_name text NOT NULL PRIMARY KEY,
\&                view_name text NOT NULL,
\&                iname text,
\&                last_refresh TIMESTAMP WITH TIME ZONE
\&        );
.Ve
.PP
All materialized views will have an entry in this table. Ora2Pg then adds the
plpgsql code to create three functions:
.PP
.Vb 3
\&        create_materialized_view(text, text, text) used to create a materialized view
\&        drop_materialized_view(text) used to delete a materialized view
\&        refresh_full_materialized_view(text) used to refresh a view
.Ve
.PP
Then it adds the \s-1SQL\s0 code to create the view and the materialized view:
.PP
.Vb 2
\&        CREATE VIEW mviewname_mview AS
\&        SELECT ... FROM ...;
\&
\&        SELECT create_materialized_view(\*(Aqmviewname\*(Aq,\*(Aqmviewname_mview\*(Aq, change with the name of the column to use for the index);
.Ve
.PP
The first argument is the name of the materialized view, the second the name of
the view on which the materialized view is based, and the third the column
name on which the index should be built (most of the time the primary key).
This column is not automatically deduced, so you need to replace its name.
.PP
As said above, Ora2Pg only supports snapshot materialized views, so the table will
be entirely refreshed by first truncating the table and then loading
all data again from the view:
.PP
.Vb 1
\&         refresh_full_materialized_view(\*(Aqmviewname\*(Aq);
.Ve
.PP
To drop the materialized view you just have to call the \fBdrop_materialized_view()\fR
function with the name of the materialized view as parameter.
.SS "Other configuration directives"
.IX Subsection "Other configuration directives"
.IP "\s-1DEBUG\s0" 4
.IX Item "DEBUG"
Setting it to 1 will enable verbose output.
.IP "\s-1IMPORT\s0" 4
.IX Item "IMPORT"
You can define common Ora2Pg configuration directives in a single file that
can be imported into other configuration files with the \s-1IMPORT\s0 configuration
directive as follows:
.Sp
.Vb 1
\&        IMPORT  commonfile.conf
.Ve
.Sp
will import all configuration directives defined in commonfile.conf into the
current configuration file.
.SS "Exporting views as PostgreSQL tables"
.IX Subsection "Exporting views as PostgreSQL tables"
You can export any Oracle view as a PostgreSQL table simply by setting the \s-1TYPE\s0
configuration option to \s-1TABLE\s0 to get the corresponding create table statement,
or use type \s-1COPY\s0 or \s-1INSERT\s0 to export the corresponding data. To allow that you
have to specify your views in the \s-1VIEW_AS_TABLE\s0 configuration option.
.PP
Then if Ora2Pg finds the view it will extract its schema (if TYPE=TABLE) into
a \s-1PG\s0 create table form, then it will extract the data (if TYPE=COPY or \s-1INSERT\s0)
following the view schema.
.PP
For example, with the following view:
.PP
.Vb 6
\&        CREATE OR REPLACE VIEW product_prices (category_id, product_count, low_price, high_price) AS
\&        SELECT  category_id, COUNT(*) as product_count,
\&            MIN(list_price) as low_price,
\&            MAX(list_price) as high_price
\&         FROM   product_information
\&        GROUP BY category_id;
.Ve
.PP
Setting \s-1VIEW_AS_TABLE\s0 to product_prices and using export type \s-1TABLE\s0 will
force Ora2Pg to detect the returned column types and to generate a create table
statement:
.PP
.Vb 6
\&        CREATE TABLE product_prices (
\&                category_id bigint,
\&                product_count integer,
\&                low_price numeric,
\&                high_price numeric
\&        );
.Ve
.PP
Data will be loaded following the \s-1COPY\s0 or \s-1INSERT\s0 export type and the view
declaration.
.PP
You can use the \s-1ALLOW\s0 and \s-1EXCLUDE\s0 directives in addition to filter other
objects to export.
.SS "Export as Kettle transformation \s-1XML\s0 files"
.IX Subsection "Export as Kettle transformation XML files"
The \s-1KETTLE\s0 export type is useful if you want to use Pentaho Data Integration
(Kettle) to import data to PostgreSQL. With this export type Ora2Pg will
generate one \s-1XML\s0 Kettle transformation file (.ktr) per table and add a line
to manually execute the transformation in the output file. For example:
.PP
.Vb 1
\&        ora2pg \-c ora2pg.conf \-t KETTLE \-j 12 \-a MYTABLE \-o load_mydata.sh
.Ve
.PP
will generate one file called '\s-1HR.MYTABLE\s0.ktr' and add a line to the output
file (load_mydata.sh):
.PP
.Vb 1
\&        #!/bin/sh
\&
\&        KETTLE_TEMPLATE_PATH=\*(Aq.\*(Aq
\&
\&        JAVAMAXMEM=4096 ./pan.sh \-file $KETTLE_TEMPLATE_PATH/HR.MYTABLE.ktr \-level Detailed
.Ve
.PP
The \-j 12 option will create a template with 12 processes to insert data into
PostgreSQL. It is also possible to specify the number of parallel queries used
to extract data from Oracle with the \-J command line option as follows:
.PP
.Vb 1
\&        ora2pg \-c ora2pg.conf \-t KETTLE \-J 4 \-j 12 \-a EMPLOYEES \-o load_mydata.sh
.Ve
.PP
This is only possible if there is a unique key defined on a numeric column, or
if you have defined the technical key to use to split the query between cores
in the \s-1DEFINED_PK\s0 configuration directive. For example:
.PP
.Vb 1
\&        DEFINED_PK      EMPLOYEES:employee_id
.Ve
.PP
will force the number of Oracle connection copies to 4 and define the \s-1SQL\s0 query
as follows in the Kettle \s-1XML\s0 transformation file:
.PP
.Vb 1
\&        <sql>SELECT * FROM HR.EMPLOYEES WHERE ABS(MOD(employee_id,${Internal.Step.Unique.Count}))=${Internal.Step.Unique.Number}</sql>
.Ve
.PP
The \s-1KETTLE\s0 export type requires that the Oracle and PostgreSQL \s-1DSN\s0 are defined.
You can also activate the \s-1TRUNCATE_TABLE\s0 directive to force a truncation of the
table before data import.
.PP
The \s-1KETTLE\s0 export type is an original work of Marc Cousin.
.SS "Migration cost assessment"
.IX Subsection "Migration cost assessment"
Estimating the cost of a migration process from Oracle to PostgreSQL is not easy. To
obtain a good assessment of this migration cost, Ora2Pg will inspect all database
objects, functions and stored procedures to detect if there are still some objects
and \s-1PL/SQL\s0 code that cannot be automatically converted by Ora2Pg.
.PP
Ora2Pg has a content analysis mode that inspects the Oracle database to generate a
text report on what the Oracle database contains and what cannot be exported.
.PP
To activate the \*(L"analysis and report\*(R" mode, you have to use the export type
\&\s-1SHOW_REPORT\s0 as in the following command:
.PP
.Vb 1
\&        ora2pg \-t SHOW_REPORT
.Ve
.PP
Here is a sample report obtained with this command:
.PP
.Vb 6
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Ora2Pg: Oracle Database Content Report
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Version Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
\&        Schema  HR
\&        Size  880.00 MB
\&         
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Object  Number  Invalid Comments
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        CLUSTER   2 0 Clusters are not supported and will not be exported.
\&        FUNCTION  40  0 Total size of function code: 81992.
\&        INDEX     435 0 232 index(es) are concerned by the export, others are automatically generated and will
\&                                        do so on PostgreSQL. 1 bitmap index(es). 230 b\-tree index(es). 1 reversed b\-tree index(es)
\&                                        Note that bitmap index(es) will be exported as b\-tree index(es) if any. Cluster, domain,
\&                                        bitmap join and IOT indexes will not be exported at all. Reverse indexes are not exported
\&                                        too, you may use a trigram\-based index (see pg_trgm) or a reverse() function based index
\&                                        and search. You may also use \*(Aqvarchar_pattern_ops\*(Aq, \*(Aqtext_pattern_ops\*(Aq or \*(Aqbpchar_pattern_ops\*(Aq
\&                                        operators in your indexes to improve search with the LIKE operator respectively into
\&                                        varchar, text or char columns.
\&        MATERIALIZED VIEW 1 0 All materialized view will be exported as snapshot materialized views, they
\&                                        are only updated when fully refreshed.
\&        PACKAGE BODY  2 1 Total size of package code: 20700.
\&        PROCEDURE 7 0 Total size of procedure code: 19198.
\&        SEQUENCE  160 0 Sequences are fully supported, but all call to sequence_name.NEXTVAL or sequence_name.CURRVAL
\&                                        will be transformed into NEXTVAL(\*(Aqsequence_name\*(Aq) or CURRVAL(\*(Aqsequence_name\*(Aq).
\&        TABLE     265 0 1 external table(s) will be exported as standard table. See EXTERNAL_TO_FDW configuration
\&                                        directive to export as file_fdw foreign tables or use COPY in your code if you just
\&                                        want to load data from external files. 2 binary columns. 4 unknown types.
\&        TABLE PARTITION 8 0 Partitions are exported using table inheritance and check constraint. 1 HASH partitions.
\&                                        2 LIST partitions. 6 RANGE partitions. Note that Hash partitions are not supported.
\&        TRIGGER   30  0 Total size of trigger code: 21677.
\&        TYPE      7 1 5 type(s) are concerned by the export, others are not supported. 2 Nested Tables.
\&                                        2 Object type. 1 Subtype. 1 Type Boby. 1 Type inherited. 1 Varrays. Note that Type
\&                                        inherited and Subtype are converted as table, type inheritance is not supported.
\&        TYPE BODY 0 3 Export of type with member method are not supported, they will not be exported.
\&        VIEW      7 0 Views are fully supported, but if you have updatable views you will need to use
\&                                        INSTEAD OF triggers.
\&        DATABASE LINK 1 0 Database links will not be exported. You may try the dblink perl contrib module or use
\&                                        the SQL/MED PostgreSQL features with the different Foreign Data Wrapper (FDW) extensions.
\&                                        
\&        Note: Invalid code will not be exported unless the EXPORT_INVALID configuration directive is activated.
.Ve
.PP
Once the database has been analysed, Ora2Pg, thanks to its ability to convert \s-1SQL\s0 and \s-1PL/SQL\s0
code from Oracle syntax to PostgreSQL, can go further by estimating the code difficulty
and the time necessary to operate a full database migration.
.PP
To estimate the migration cost in man-days, Ora2Pg allows you to use a configuration
directive called \s-1ESTIMATE_COST\s0 that you can also enable on the command line:
.PP
.Vb 1
\&        \-\-estimate_cost
.Ve
.PP
This feature can only be used with the \s-1SHOW_REPORT, FUNCTION, PROCEDURE, PACKAGE\s0
and \s-1QUERY\s0 export types.
.PP
.Vb 1
\&        ora2pg \-t SHOW_REPORT  \-\-estimate_cost
.Ve
.PP
The generated report is the same as above but with a new 'Estimated cost' column, as follows:
.PP
.Vb 6
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Ora2Pg: Oracle Database Content Report
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Version Oracle Database 10g Express Edition Release 10.2.0.1.0
\&        Schema  HR
\&        Size  890.00 MB
\&         
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Object  Number  Invalid Estimated cost  Comments
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        DATABASE LINK  3 0 9 Database links will be exported as SQL/MED PostgreSQL\*(Aqs Foreign Data Wrapper (FDW) extensions
\&                                        using oracle_fdw.
\&        FUNCTION  2 0 7 Total size of function code: 369 bytes. HIGH_SALARY: 2, VALIDATE_SSN: 3.
\&        INDEX 21  0 11  11 index(es) are concerned by the export, others are automatically generated and will do so
\&                                        on PostgreSQL. 11 b\-tree index(es). Note that bitmap index(es) will be exported as b\-tree
\&                                        index(es) if any. Cluster, domain, bitmap join and IOT indexes will not be exported at all.
\&                                        Reverse indexes are not exported too, you may use a trigram\-based index (see pg_trgm) or a
\&                                        reverse() function based index and search. You may also use \*(Aqvarchar_pattern_ops\*(Aq, \*(Aqtext_pattern_ops\*(Aq
\&                                        or \*(Aqbpchar_pattern_ops\*(Aq operators in your indexes to improve search with the LIKE operator
\&                                        respectively into varchar, text or char columns.
\&        JOB 0 0 0 Job are not exported. You may set external cron job with them.
\&        MATERIALIZED VIEW 1 0 3 All materialized view will be exported as snapshot materialized views, they
\&                                                are only updated when fully refreshed.
\&        PACKAGE BODY  0 2 54  Total size of package code: 2487 bytes. Number of procedures and functions found
\&                                                inside those packages: 7. two_proc.get_table: 10, emp_mgmt.create_dept: 4,
\&                                                emp_mgmt.hire: 13, emp_mgmt.increase_comm: 4, emp_mgmt.increase_sal: 4,
\&                                                emp_mgmt.remove_dept: 3, emp_mgmt.remove_emp: 2.
\&        PROCEDURE 4 0 39  Total size of procedure code: 2436 bytes. TEST_COMMENTAIRE: 2, SECURE_DML: 3,
\&                                                PHD_GET_TABLE: 24, ADD_JOB_HISTORY: 6.
\&        SEQUENCE  3 0 0 Sequences are fully supported, but all call to sequence_name.NEXTVAL or sequence_name.CURRVAL
\&                                                will be transformed into NEXTVAL(\*(Aqsequence_name\*(Aq) or CURRVAL(\*(Aqsequence_name\*(Aq).
\&        SYNONYM   3 0 4 SYNONYMs will be exported as views. SYNONYMs do not exists with PostgreSQL but a common workaround
\&                                                is to use views or set the PostgreSQL search_path in your session to access
\&                                                object outside the current schema.
\&                                                user1.emp_details_view_v is an alias to hr.emp_details_view.
\&                                                user1.emp_table is an alias to hr.employees@other_server.
\&                                                user1.offices is an alias to hr.locations.
\&        TABLE 17  0 8.5 1 external table(s) will be exported as standard table. See EXTERNAL_TO_FDW configuration
\&                                        directive to export as file_fdw foreign tables or use COPY in your code if you just want to
\&                                        load data from external files. 2 binary columns. 4 unknown types.
\&        TRIGGER 1 1 4 Total size of trigger code: 123 bytes. UPDATE_JOB_HISTORY: 2.
\&        TYPE  7 1 5 5 type(s) are concerned by the export, others are not supported. 2 Nested Tables. 2 Object type.
\&                                        1 Subtype. 1 Type Boby. 1 Type inherited. 1 Varrays. Note that Type inherited and Subtype are
\&                                        converted as table, type inheritance is not supported.
\&        TYPE BODY 0 3 30  Export of type with member method are not supported, they will not be exported.
\&        VIEW  1 1 1 Views are fully supported, but if you have updatable views you will need to use INSTEAD OF triggers.
\&        \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&        Total 65  8 162.5 162.5 cost migration units means approximatively 2 man day(s).
.Ve
.PP
The last line shows the total estimated migration cost in man-days based on the
number of migration units estimated for each object. A migration unit represents
around five minutes for a PostgreSQL expert. If this is your first migration you can
raise it with the configuration directive \s-1COST_UNIT_VALUE\s0 or the \-\-cost_unit_value
command line option:
.PP
.Vb 1
\&        ora2pg \-t SHOW_REPORT  \-\-estimate_cost \-\-cost_unit_value 10
.Ve
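.PP
The relation between cost units and man-days can be sketched as follows (a
minimal illustration assuming the default of 5 minutes per unit; the 8\-hour
man-day used here is an assumption, not a value taken from Ora2Pg):

```python
# Convert migration cost units into an approximate man-day figure.
units = 162.5                  # total units from the sample report above
unit_minutes = 5               # default minutes per migration unit (COST_UNIT_VALUE)
man_day_minutes = 8 * 60       # assumed length of a man-day in minutes

man_days = units * unit_minutes / man_day_minutes
print(round(man_days, 2))      # ~1.69, reported as "approximatively 2 man day(s)"
```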
.PP
Ora2Pg is also able to give you a migration difficulty level assessment; here is a sample:
.PP
Migration level: B\-5
.PP
.Vb 10
\&    Migration levels:
\&        A \- Migration that might be run automatically
\&        B \- Migration with code rewrite and a human\-days cost up to 5 days
\&        C \- Migration with code rewrite and a human\-days cost above 5 days
\&    Technical levels:
\&        1 = trivial: no stored functions and no triggers
\&        2 = easy: no stored functions but with triggers, no manual rewriting
\&        3 = simple: stored functions and/or triggers, no manual rewriting
\&        4 = manual: no stored functions but with triggers or views with code rewriting
\&        5 = difficult: stored functions and/or triggers with code rewriting
.Ve
.PP
This assessment consists of a letter, A or B, specifying whether the migration
needs manual rewriting or not, and a number from 1 to 5 giving you the technical
difficulty level. There is an additional option, \-\-human_days_limit, to specify
the human-days limit above which the migration level should be set to C,
indicating that it needs a huge amount of work and full project management
with migration support. Default is 10 human-days. You can use the configuration
directive \s-1HUMAN_DAYS_LIMIT\s0 to change this default value permanently.
.PP
This feature has been developed to help you or your boss decide which
database to migrate first and which team must be mobilized to operate
the migration.
.SS "Global Oracle and MySQL migration assessment"
.IX Subsection "Global Oracle and MySQL migration assessment"
Ora2Pg comes with a script, ora2pg_scanner, that can be used when you have a huge
number of instances and schemas to scan for migration assessment.
.PP
Usage: ora2pg_scanner \-l \s-1CSVFILE\s0 [\-o \s-1OUTDIR\s0]
.PP
.Vb 8
\&   \-b | \-\-binpath DIR: full path to directory where the ora2pg binary stays.
\&                Might be useful only on Windows OS.
\&   \-c | \-\-config FILE: set custom configuration file to use otherwise ora2pg
\&                will use the default: /etc/ora2pg/ora2pg.conf.
\&   \-l | \-\-list FILE : CSV file containing a list of databases to scan with
\&                all required information. The first line of the file
\&                can contain the following header that describes the
\&                format that must be used:
\&
\&                "type","schema/database","dsn","user","password"
\&
\&   \-o | \-\-outdir DIR : (optional) by default all reports will be dumped to a
\&                directory named \*(Aqoutput\*(Aq, it will be created automatically.
\&                If you want to change the name of this directory, set the name
\&                at second argument.
\&
\&   \-t | \-\-test : just try all connections by retrieving the required schema
\&                 or database name. Useful to validate your CSV list file.
\&   \-u | \-\-unit MIN : redefine globally the migration cost unit value in minutes.
\&                 Default is taken from the ora2pg.conf (default 5 minutes).
\&
\&   Here is a full example of a CSV databases list file:
\&
\&        "type","schema/database","dsn","user","password"
\&        "MYSQL","sakila","dbi:mysql:host=192.168.1.10;database=sakila;port=3306","root","secret"
\&        "ORACLE","HR","dbi:Oracle:host=192.168.1.10;sid=XE;port=1521","system","manager"
\&        "MSSQL","HR","dbi:ODBC:driver=msodbcsql18;server=srv.database.windows.net;database=testdb","system","manager"
\&
\&   The CSV field separator must be a comma.
\&
\&   Note that if you want to scan all schemas from an Oracle instance you just
\&   have to leave the schema field empty, Ora2Pg will automatically detect all
\&   available schemas and generate a report for each one. Of course you need to
\&   use a connection user with enough privileges to be able to scan all schemas.
\&   For example:
\&
\&        "ORACLE","","dbi:Oracle:host=192.168.1.10;sid=XE;port=1521","system","manager"
\&        "MSSQL","","dbi:ODBC:driver=msodbcsql18;server=srv.database.windows.net;database=testdb","usrname","passwd"
\&
\&   will generate a report for all schema in the XE instance. Note that in this
\&   case the SCHEMA directive in ora2pg.conf must not be set.
.Ve
.PP
It will generate a \s-1CSV\s0 file with the assessment results, one line per schema or
database, and a detailed \s-1HTML\s0 report for each database scanned.
.PP
Hint: use the \-t | \-\-test option first to validate all the connections in
your \s-1CSV\s0 file.
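.PP
Before running the scanner, you can also sanity-check the list file yourself.
A minimal sketch (the file name dblist.csv and its contents are examples; the
check naively assumes there are no commas inside the quoted fields):
.PP
```shell
# Create a sample databases list file (normally you already have one).
cat > dblist.csv <<'EOF'
"MYSQL","sakila","dbi:mysql:host=192.168.1.10;database=sakila;port=3306","root","secret"
"ORACLE","HR","dbi:Oracle:host=192.168.1.10;sid=XE;port=1521","system","manager"
EOF

# Every line must contain exactly 5 comma-separated fields.
bad=$(awk -F',' 'NF != 5 {print NR ": " $0}' dblist.csv)
if [ -z "$bad" ]; then
    echo "dblist.csv looks well formed"
else
    printf 'malformed lines:\n%s\n' "$bad"
fi
```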
.PP
On Windows you must use the \-b command line option to set the directory where
ora2pg_scanner is installed, otherwise the calls to the ora2pg command will fail.
.PP
In the migration assessment details about functions, Ora2Pg always includes by
default 2 migration units for \s-1TEST\s0 and 1 unit for \s-1SIZE\s0 per 1000 characters of
code. This means that by default it adds at least 15 minutes per function to the
migration assessment. Obviously, if you have unit tests or very simple
functions, this will not reflect the real migration time.
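.PP
The arithmetic can be spelled out. A sketch, assuming \s-1SIZE\s0 units are counted
per started block of 1000 characters (the rounding rule is an assumption; the
authoritative scores live in lib/Ora2Pg/PLSQL.pm):
.PP
```shell
unit_minutes=5   # default cost unit in minutes (see ora2pg.conf)
test_units=2     # fixed TEST units charged per function
code_chars=2500  # hypothetical length of a function body

# 1 SIZE unit per started block of 1000 characters (assumed ceiling rounding)
size_units=$(( (code_chars + 999) / 1000 ))
total_units=$(( test_units + size_units ))
echo "estimated cost: $(( total_units * unit_minutes )) minutes"
```
A function shorter than 1000 characters thus gets 2 + 1 = 3 units, i.e. the
15 minutes mentioned above.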
.SS "Migration assessment method"
.IX Subsection "Migration assessment method"
Migration unit scores given to each type of Oracle database object are defined in the
Perl library lib/Ora2Pg/PLSQL.pm in the \f(CW%OBJECT_SCORE\fR variable definition.
.PP
The number of \s-1PL/SQL\s0 lines associated to a migration unit is also defined in this file
in the \f(CW$SIZE_SCORE\fR variable value.
.PP
The number of migration units associated to each \s-1PL/SQL\s0 code difficulties can be found
in the same Perl library lib/Ora2Pg/PLSQL.pm in the hash \f(CW%UNCOVERED_SCORE\fR initialization.
.PP
This assessment method is a work in progress, so I'm expecting feedback on
migration experiences to polish the scores/units attributed in those variables.
.SS "Improving indexes and constraints creation speed"
.IX Subsection "Improving indexes and constraints creation speed"
Using the \s-1LOAD\s0 export type and a file containing the \s-1SQL\s0 statements to
execute, it is possible to dispatch those statements over multiple PostgreSQL
connections. To be able to use this feature, \s-1PG_DSN, PG_USER\s0 and \s-1PG_PWD\s0 must be set. Then:
.PP
.Vb 1
\&        ora2pg \-t LOAD \-c config/ora2pg.conf \-i schema/tables/INDEXES_table.sql \-j 4
.Ve
.PP
will dispatch index creation over 4 simultaneous PostgreSQL connections.
.PP
This will considerably accelerate this part of the migration process for large
data volumes.
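.PP
For reference, the three directives might look like this in ora2pg.conf (host,
database name and credentials are placeholders):
.PP
```
PG_DSN          dbi:Pg:dbname=testdb;host=localhost;port=5432
PG_USER         postgres
PG_PWD          secret
```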
.SS "Exporting \s-1LONG RAW\s0"
.IX Subsection "Exporting LONG RAW"
If you still have columns defined as \s-1LONG RAW,\s0 Ora2Pg will not be able to export
this kind of data. The \s-1OCI\s0 library fails to export them and always returns the
same first record. To be able to export the data you need to transform the column
into a \s-1BLOB\s0 by creating a temporary table before migrating data. For example, the
Oracle table:
.PP
.Vb 5
\&        SQL> DESC TEST_LONGRAW
\&         Name                 NULL ?   Type
\&         \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\&         ID                            NUMBER
\&         C1                            LONG RAW
.Ve
.PP
needs to be \*(L"translated\*(R" into a table using \s-1BLOB\s0 as follows:
.PP
.Vb 1
\&        CREATE TABLE test_blob (id NUMBER, c1 BLOB);
.Ve
.PP
And then copy the data with the following \s-1INSERT\s0 query:
.PP
.Vb 1
\&        INSERT INTO test_blob SELECT id, to_lob(c1) FROM test_longraw;
.Ve
.PP
Then you just have to exclude the original table from the export (see the \s-1EXCLUDE\s0
directive) and rename the new temporary table on the fly using the
\&\s-1REPLACE_TABLES\s0 configuration directive.
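.PP
Following the example above, the two directives might be set as follows (a
sketch, assuming \s-1REPLACE_TABLES\s0 takes EXPORTED_NAME:DESTINATION_NAME pairs):
.PP
```
EXCLUDE         TEST_LONGRAW
REPLACE_TABLES  TEST_BLOB:TEST_LONGRAW
```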
.SS "Global variables"
.IX Subsection "Global variables"
Oracle allows the use of global variables defined in packages. Ora2Pg will
export these variables for PostgreSQL as user-defined custom variables
available in a session. Oracle variable assignments are exported as
calls to:
.PP
.Vb 1
\&    PERFORM set_config(\*(Aqpkgname.varname\*(Aq, value, false);
.Ve
.PP
Use of these variables in the code is replaced by:
.PP
.Vb 1
\&    current_setting(\*(Aqpkgname.varname\*(Aq)::global_variables_type;
.Ve
.PP
where global_variables_type is the type of the variable extracted from
the package definition.
.PP
If the variable is a constant or has a default value assigned at
declaration, Ora2Pg will create a file global_variables.conf with
the definition to include in the postgresql.conf file so that the
values are already set at database connection. Note that the
value can always be modified by the user, so it is not exactly
a constant.
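.PP
For example, a package variable with a default value would end up as a single
customized-option line (names and value are placeholders) to include from
postgresql.conf:
.PP
```
# global_variables.conf, included from postgresql.conf
pkgname.varname = 'value'
```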
.SS "Hints"
.IX Subsection "Hints"
Converting your queries with Oracle-style outer join (+) syntax to \s-1ANSI\s0 standard \s-1SQL\s0 on
the Oracle side can save you a lot of time during the migration. The \s-1TOAD\s0 Query Builder
can rewrite these queries using the proper \s-1ANSI\s0 syntax, see: http://www.toadworld.com/products/toad\-for\-oracle/f/10/t/9518.aspx
.PP
There's also an alternative with \s-1SQL\s0 Developer Data Modeler, see
http://www.thatjeffsmith.com/archive/2012/01/sql\-developer\-data\-modeler\-quick\-tip\-use\-oracle\-join\-syntax\-or\-ansi/
.PP
Toad is also able to rewrite the native Oracle \s-1\fBDECODE\s0()\fR syntax into the \s-1ANSI\s0
standard \s-1SQL CASE\s0 expression. You can find some slides about this in a
presentation given at PgConf.RU: http://ora2pg.darold.net/slides/ora2pg_the_hard_way.pdf
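.PP
As an illustration of both rewrites, using hypothetical emp/dept tables:
.PP
```sql
-- Oracle (+) outer join: dept is the optional side
SELECT e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno (+);

-- ANSI equivalent
SELECT e.ename, d.dname
  FROM emp e
  LEFT JOIN dept d ON e.deptno = d.deptno;

-- Oracle DECODE()
SELECT DECODE(status, 'A', 'Active', 'I', 'Inactive', 'Unknown') FROM emp;

-- ANSI CASE equivalent
SELECT CASE status WHEN 'A' THEN 'Active'
                   WHEN 'I' THEN 'Inactive'
                   ELSE 'Unknown' END
  FROM emp;
```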
.SS "Test the migration"
.IX Subsection "Test the migration"
The action type \s-1TEST\s0 allows you to check that all objects from the Oracle
database have been created in PostgreSQL. Of course, \s-1PG_DSN\s0 must be set to be
able to check the PostgreSQL side.
.PP
Note that this feature respects the schema name limitation if \s-1EXPORT_SCHEMA\s0 and
\&\s-1SCHEMA\s0 or \s-1PG_SCHEMA\s0 are defined. If only \s-1EXPORT_SCHEMA\s0 is set, all schemas from
Oracle and PostgreSQL are scanned. You can filter to a single schema using
\&\s-1SCHEMA\s0 and/or \s-1PG_SCHEMA,\s0 but you can not filter on a list of schemas. To test
a list of schemas you will have to repeat the calls to Ora2Pg, specifying
a single schema each time.
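.PP
Such a per-schema loop might look like this (the schema names are placeholders;
it assumes the \-n | \-\-namespace option is used to select the schema):
.PP
```
for s in HR SCOTT SALES; do
    ora2pg -t TEST -c config/ora2pg.conf -n "$s" > migration_diff_$s.txt
done
```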
.PP
For example, the command:
.PP
.Vb 1
\&        ora2pg \-t TEST \-c config/ora2pg.conf > migration_diff.txt
.Ve
.PP
will create a file containing a report of all object and row counts on both
sides, Oracle and PostgreSQL, with an error section giving you the details of
the differences for each kind of object. Here is a sample result:
.PP
.Vb 7
\&        [TEST INDEXES COUNT]
\&        ORACLEDB:DEPARTMENTS:2
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:6
\&        POSTGRES:employees:6
\&        [ERRORS INDEXES COUNT]
\&        Table departments doesn\*(Aqt have the same number of indexes in Oracle (2) and in PostgreSQL (1).
\&
\&        [TEST UNIQUE CONSTRAINTS COUNT]
\&        ORACLEDB:DEPARTMENTS:1
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS UNIQUE CONSTRAINTS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of unique constraints.
\&
\&        [TEST PRIMARY KEYS COUNT]
\&        ORACLEDB:DEPARTMENTS:1
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS PRIMARY KEYS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of primary keys.
\&
\&        [TEST CHECK CONSTRAINTS COUNT]
\&        ORACLEDB:DEPARTMENTS:1
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS CHECK CONSTRAINTS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of check constraints.
\&
\&        [TEST NOT NULL CONSTRAINTS COUNT]
\&        ORACLEDB:DEPARTMENTS:1
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS NOT NULL CONSTRAINTS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of not null constraints.
\&
\&        [TEST COLUMN DEFAULT VALUE COUNT]
\&        ORACLEDB:DEPARTMENTS:1
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS COLUMN DEFAULT VALUE COUNT]
\&        OK, Oracle and PostgreSQL have the same number of column default value.
\&
\&        [TEST IDENTITY COLUMN COUNT]
\&        ORACLEDB:DEPARTMENTS:1
\&        POSTGRES:departments:1
\&        ORACLEDB:EMPLOYEES:0
\&        POSTGRES:employees:0
\&        [ERRORS IDENTITY COLUMN COUNT]
\&        OK, Oracle and PostgreSQL have the same number of identity column.
\&
\&        [TEST FOREIGN KEYS COUNT]
\&        ORACLEDB:DEPARTMENTS:0
\&        POSTGRES:departments:0
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS FOREIGN KEYS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of foreign keys.
\&
\&        [TEST TABLE COUNT]
\&        ORACLEDB:TABLE:2
\&        POSTGRES:TABLE:2
\&        [ERRORS TABLE COUNT]
\&        OK, Oracle and PostgreSQL have the same number of TABLE.
\&
\&        [TEST TABLE TRIGGERS COUNT]
\&        ORACLEDB:DEPARTMENTS:0
\&        POSTGRES:departments:0
\&        ORACLEDB:EMPLOYEES:1
\&        POSTGRES:employees:1
\&        [ERRORS TABLE TRIGGERS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of table triggers.
\&
\&        [TEST TRIGGER COUNT]
\&        ORACLEDB:TRIGGER:2
\&        POSTGRES:TRIGGER:2
\&        [ERRORS TRIGGER COUNT]
\&        OK, Oracle and PostgreSQL have the same number of TRIGGER.
\&
\&        [TEST VIEW COUNT]
\&        ORACLEDB:VIEW:1
\&        POSTGRES:VIEW:1
\&        [ERRORS VIEW COUNT]
\&        OK, Oracle and PostgreSQL have the same number of VIEW.
\&
\&        [TEST MVIEW COUNT]
\&        ORACLEDB:MVIEW:0
\&        POSTGRES:MVIEW:0
\&        [ERRORS MVIEW COUNT]
\&        OK, Oracle and PostgreSQL have the same number of MVIEW.
\&
\&        [TEST SEQUENCE COUNT]
\&        ORACLEDB:SEQUENCE:1
\&        POSTGRES:SEQUENCE:0
\&        [ERRORS SEQUENCE COUNT]
\&        SEQUENCE does not have the same count in Oracle (1) and in PostgreSQL (0).
\&
\&        [TEST TYPE COUNT]
\&        ORACLEDB:TYPE:1
\&        POSTGRES:TYPE:0
\&        [ERRORS TYPE COUNT]
\&        TYPE does not have the same count in Oracle (1) and in PostgreSQL (0).
\&
\&        [TEST FDW COUNT]
\&        ORACLEDB:FDW:0
\&        POSTGRES:FDW:0
\&        [ERRORS FDW COUNT]
\&        OK, Oracle and PostgreSQL have the same number of FDW.
\&
\&        [TEST FUNCTION COUNT]
\&        ORACLEDB:FUNCTION:3
\&        POSTGRES:FUNCTION:3
\&        [ERRORS FUNCTION COUNT]
\&        OK, Oracle and PostgreSQL have the same number of functions.
\&
\&        [TEST SEQUENCE VALUES]
\&        ORACLEDB:EMPLOYEES_NUM_SEQ:1285
\&        POSTGRES:employees_num_seq:1285
\&        [ERRORS SEQUENCE VALUES COUNT]
\&        OK, Oracle and PostgreSQL have the same values for sequences
\&
\&        [TEST ROWS COUNT]
\&        ORACLEDB:DEPARTMENTS:27
\&        POSTGRES:departments:27
\&        ORACLEDB:EMPLOYEES:854
\&        POSTGRES:employees:854
\&        [ERRORS ROWS COUNT]
\&        OK, Oracle and PostgreSQL have the same number of rows.
.Ve
.SS "Data validation"
.IX Subsection "Data validation"
Data validation consists of comparing the data retrieved through a foreign table
pointing to the source Oracle table with the data in the local PostgreSQL table
resulting from the data export.
.PP
To run data validation you can use a direct connection like any other Ora2Pg
action, but you can also use the oracle_fdw, mysql_fdw or tds_fdw extensions,
provided that the \s-1FDW_SERVER\s0 and \s-1PG_DSN\s0 configuration directives are set.
.PP
By default Ora2Pg will extract the first 10000 rows from both sides; you can
change this value using the \s-1DATA_VALIDATION_ROWS\s0 directive. When it is set to
zero, all rows of the tables are compared.
.PP
Data validation requires that the table has a primary key or unique index
and that none of the key columns is a \s-1LOB.\s0 Rows are sorted using this unique
key. Due to differences in sort behavior between Oracle and PostgreSQL, if the
collation of the unique key columns in PostgreSQL is not 'C', the sort order can
differ from Oracle's; in this case the data validation will fail.
.PP
Data validation must be done before any data is modified.
.PP
Ora2Pg will stop comparing two tables once \s-1DATA_VALIDATION_ROWS\s0 rows are
reached or 10 errors have been encountered; the result is dumped to a file named
\&\*(L"data_validation.log\*(R" written in the current directory by default. The number
of errors before stopping the diff between rows can be controlled using the
configuration directive \s-1DATA_VALIDATION_ERROR.\s0 All rows in error are printed
to the output file for your analysis.
.PP
It is possible to parallelize data validation using the \-P option or the
corresponding configuration directive \s-1PARALLEL_TABLES\s0 in ora2pg.conf.
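.PP
For example, assuming the data validation action type is named \s-1TEST_DATA,\s0 a
run comparing four tables at a time might be:
.PP
```
ora2pg -t TEST_DATA -c config/ora2pg.conf -P 4
```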
.SS "Use of System Change Number (\s-1SCN\s0)"
.IX Subsection "Use of System Change Number (SCN)"
Ora2Pg is able to export data as of a specific \s-1SCN.\s0 You can set it at the
command line using the \-S or \-\-scn option. You can give a specific \s-1SCN\s0 or, if
you want to use the current \s-1SCN\s0 at first connection time, set the value to
'current'. In the latter case the connection user must have the
\&\*(L"\s-1SELECT ANY DICTIONARY\*(R"\s0 privilege or the \*(L"\s-1SELECT_CATALOG_ROLE\*(R"\s0
role; the current \s-1SCN\s0 is looked up in the v$database view.
.PP
Example of use:
.PP
.Vb 1
\&    ora2pg \-c ora2pg.conf \-t COPY \-\-scn 16605281
.Ve
.PP
For example, this adds the following clause to the query used to retrieve data:
.PP
.Vb 1
\&    AS OF SCN 16605281
.Ve
.PP
You can also use the \-\-scn option to use the Oracle flashback capability by
specifying a timestamp expression instead of a \s-1SCN.\s0 For example:
.PP
.Vb 1
\&    ora2pg \-c ora2pg.conf \-t COPY \-\-scn "TO_TIMESTAMP(\*(Aq2021\-12\-01 00:00:00\*(Aq, \*(AqYYYY\-MM\-DD HH:MI:SS\*(Aq)"
.Ve
.PP
This will add the following clause to the query used to retrieve data:
.PP
.Vb 1
\&    AS OF TIMESTAMP TO_TIMESTAMP(\*(Aq2021\-12\-01 00:00:00\*(Aq, \*(AqYYYY\-MM\-DD HH:MI:SS\*(Aq)
.Ve
.PP
or, for example, to only retrieve yesterday's data:
.PP
.Vb 1
\&    ora2pg \-c ora2pg.conf \-t COPY \-\-scn "SYSDATE \- 1"
.Ve
.SS "Change Data Capture (\s-1CDC\s0)"
.IX Subsection "Change Data Capture (CDC)"
Ora2Pg does not have such a feature, which would allow importing data and then
applying only the changes made after the first import. However, you can use the
\&\-\-cdc_ready option to export data while registering the \s-1SCN\s0 at the time of
each table export. All per-table SCNs are written to a file named \s-1TABLES_SCN\s0.log
by default; this can be changed using the \-C | \-\-cdc_file option.
.PP
The SCNs registered per table during a \s-1COPY\s0 or \s-1INSERT\s0 export can be used with
a \s-1CDC\s0 tool. The format of the file is one tablename:SCN pair per line.
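.PP
Since the file holds one tablename:SCN pair per line, the registered \s-1SCN\s0 for a
given table is easy to extract. A minimal sketch with a fabricated file (table
names and \s-1SCN\s0 values are made up):
.PP
```shell
# Fabricated example of the file written during a COPY/INSERT export
# with --cdc_ready.
cat > TABLES_SCN.log <<'EOF'
EMPLOYEES:16605281
DEPARTMENTS:16605283
EOF

# Extract the SCN registered for one table, e.g. to seed a CDC tool.
scn=$(awk -F':' '$1 == "EMPLOYEES" {print $2}' TABLES_SCN.log)
echo "EMPLOYEES exported as of SCN $scn"
```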
.SS "Importing \s-1BLOB\s0 as large objects"
.IX Subsection "Importing BLOB as large objects"
By default Ora2Pg imports Oracle \s-1BLOB\s0 as bytea, the destination column being
created with the bytea data type. If you want to use large objects instead
of bytea, just add the \-\-blob_to_lo option to the ora2pg command. It will
create the destination column with data type Oid and will save the \s-1BLOB\s0 as a
large object using the \fBlo_from_bytea()\fR function. The Oid returned by the
call to \fBlo_from_bytea()\fR is inserted in the destination column instead of
a bytea. Because of the use of this function, the option can only be used
with actions \s-1SHOW_COLUMN, TABLE\s0 and \s-1INSERT.\s0 Action \s-1COPY\s0 is not allowed.
.PP
If you want to use \s-1COPY\s0 or have huge \s-1BLOB\s0 values (> 1GB) that can not be
imported using \fBlo_from_bytea()\fR, you can add the \-\-lo_import option to the
ora2pg command. This allows importing the data in two passes.
.PP
1) Exporting data using \s-1COPY\s0 or \s-1INSERT\s0 will set the Oid destination column for
the \s-1BLOB\s0 to 0 and save the \s-1BLOB\s0 value into a dedicated file. It will also
create a shell script to import the \s-1BLOB\s0 files into the database using the psql
command \elo_import and to update the table's Oid column to the returned large
object Oid. The script is named lo_import\-TABLENAME.sh.
.PP
2) Execute all scripts lo_import\-TABLENAME.sh after setting the environment
variables \s-1PGDATABASE\s0 and optionally \s-1PGHOST, PGPORT, PGUSER,\s0 etc. if they do
not correspond to the default values for libpq.
.PP
You might also manually execute a \s-1VACUUM FULL\s0 on the table to remove the bloat
created by the table update.
.PP
Limitation: the table must have a primary key; it is used in the \s-1WHERE\s0
clause to update the Oid column after the large object import. Importing \s-1BLOB\s0
values using this second method (\-\-lo_import) is very slow, so it should be
reserved for rows where the \s-1BLOB\s0 is larger than 1GB; for all other rows use
the \-\-blob_to_lo option. To filter the rows you can use the \s-1WHERE\s0
configuration directive in ora2pg.conf.
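.PP
For example, a per-table \s-1WHERE\s0 clause keeping only oversized rows for the
\&\-\-lo_import pass might look like this (table and column names are
hypothetical):
.PP
```
WHERE   DOCUMENTS[dbms_lob.getlength(content) > 1073741824]
```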
.SH "SUPPORT"
.IX Header "SUPPORT"
.SS "Author / Maintainer"
.IX Subsection "Author / Maintainer"
Gilles Darold <gilles \s-1AT\s0 darold \s-1DOT\s0 net>
.PP
Please report any bugs, patches, help, etc. to <gilles \s-1AT\s0 darold \s-1DOT\s0 net>.
.SS "Feature request"
.IX Subsection "Feature request"
If you need new features let me know at <gilles \s-1AT\s0 darold \s-1DOT\s0 net>. This helps
a lot to develop a better and more useful tool.
.SS "How to contribute ?"
.IX Subsection "How to contribute ?"
Any contribution to build a better tool is welcome; just send me your ideas,
feature requests or patches and they will be applied.
.SH "LICENSE"
.IX Header "LICENSE"
Copyright (c) 2000\-2023 Gilles Darold \- All rights reserved.
.PP
.Vb 4
\&        This program is free software: you can redistribute it and/or modify
\&        it under the terms of the GNU General Public License as published by
\&        the Free Software Foundation, either version 3 of the License, or
\&        any later version.
\&
\&        This program is distributed in the hope that it will be useful,
\&        but WITHOUT ANY WARRANTY; without even the implied warranty of
\&        MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
\&        GNU General Public License for more details.
\&
\&        You should have received a copy of the GNU General Public License
\&        along with this program.  If not, see < http://www.gnu.org/licenses/ >.
.Ve
.SH "ACKNOWLEDGEMENT"
.IX Header "ACKNOWLEDGEMENT"
Many thanks to all the great contributors; see the changelog for all acknowledgments.
