ADMINISTERING ORACLE GOLDENGATE





17 TUNING THE PERFORMANCE OF ORACLE GOLDENGATE

This chapter contains suggestions for improving the performance of Oracle
GoldenGate components.

Topics:

 * Using Multiple Process Groups
   
 * Splitting Large Tables Into Row Ranges Across Process Groups
   
 * Configuring Oracle GoldenGate to Use the Network Efficiently
   
 * Eliminating Disk I/O Bottlenecks
   
 * Managing Virtual Memory and Paging
   
 * Optimizing Data Filtering and Conversion
   
 * Tuning Replicat Transactions
   
 * Using Healthcheck Scripts to Monitor and Troubleshoot
   The Oracle GoldenGate Healthcheck script provides database-side information
   for Oracle Databases to support monitoring and troubleshooting.

Parent topic: Administering Oracle GoldenGate Classic Architecture


17.1 USING MULTIPLE PROCESS GROUPS

Typically, only one Extract group is required to capture from a database
efficiently. However, depending on the redo (transaction) volume, or on the data
and operation types, you may need to add one or more Extract groups to the
configuration.

Similarly, only one Replicat group is typically needed to apply data to a target
database if you use Replicat in coordinated mode. (See About Coordinated Replicat
Mode for more information.) Even in coordinated mode, however, there are cases in
which multiple Replicat groups are required. If you are using Replicat in classic
mode and your applications generate a high transaction volume, you probably will
need to use parallel Replicat groups.

Because each Oracle GoldenGate component — Extract, data pump, trail, Replicat —
is an independent module, you can combine them in ways that suit your needs. You
can use multiple trails and parallel Extract and Replicat processes (with or
without data pumps) to handle large transaction volume, improve performance,
eliminate bottlenecks, reduce latency, or isolate the processing of specific
data.

Figure 17-1 shows some of the ways that you can configure Oracle GoldenGate to
improve throughput speed and overcome network bandwidth issues.

Figure 17-1 Load-balancing configurations that improve performance

The labels in the figure indicate the following:

 * A: Parallel Extracts divide the load. For example, by schema or to isolate
   tables that generate fetches.

 * B: A data pump with a local trail can be used for filtering, conversion, and
   network fault tolerance.

 * C: Multiple data pumps work around per-process network bandwidth limitations
   to increase TCP/IP throughput. Divide the TABLE parameter statements among
   them.

 * D: Parallel Replicats increase throughput to the database. Any trail can be
   read by one or more Replicats. Divide MAP statements among them.

 * Considerations for Using Multiple Process Groups
   
 * Using Parallel Replicat Groups on a Target System
   
 * Using Multiple Extract Groups with Multiple Replicat Groups
   

Parent topic: Tuning the Performance of Oracle GoldenGate

17.1.1 CONSIDERATIONS FOR USING MULTIPLE PROCESS GROUPS

Before configuring multiple processing groups, review the following
considerations to ensure that your configuration produces the desired results
and maintains data integrity.

 * Maintaining Data Integrity
   
 * Number of Groups
   
 * Memory
   
 * Isolating Processing-Intensive Tables
   

Parent topic: Using Multiple Process Groups

17.1.1.1 MAINTAINING DATA INTEGRITY

Not all workloads can be partitioned across multiple groups and still preserve
the original transaction atomicity. You must determine whether the objects in
one group will ever have dependencies on objects in any other group,
transactional or otherwise. For example, tables for which the workload routinely
updates the primary key cannot easily be partitioned in this manner. DDL
replication (if supported for the database) is not viable in this mode, nor is
the use of some SQLEXEC or EVENTACTIONS features that base their actions on a
specific record.

If your tables do not have any foreign-key dependencies or updates to primary
keys, you may be able to use multiple processes. Keep related DML together in
the same process stream to ensure data integrity.

Parent topic: Considerations for Using Multiple Process Groups

17.1.1.2 NUMBER OF GROUPS

The number of concurrent Extract and Replicat process groups that can run on a
system depends on how much system memory is available. Each Classic Extract and
Replicat process needs approximately 25-55 MB of memory or more, depending on
the size of the transactions and the number of concurrent transactions. The
Oracle GoldenGate GGSCI command interface fully supports up to 5,000 concurrent
Extract and Replicat groups (combined) per instance of Oracle GoldenGate
Manager. At the supported level, all groups can be controlled and viewed in full
with GGSCI commands such as the INFO and STATUS commands. Beyond the supported
level, group information is not displayed and errors may occur. Oracle
GoldenGate recommends keeping the number of Extract and Replicat groups
(combined) at a more manageable level, such as 100 or below, in order to manage
your environment effectively. The maximum number of groups is controlled by the
MAXGROUPS parameter, whose default is 1000.
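
For example, to keep the combined number of groups at a more manageable level,
MAXGROUPS might be set in the GLOBALS file as follows (the value is illustrative
only; confirm the parameter's placement and limits in Reference for Oracle
GoldenGate):

    MAXGROUPS 300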

For Windows Server environments, the number of process groups that can be run
is tightly coupled to the ‘non-interactive’ Windows desktop heap memory
settings. The default Windows desktop heap settings may be enough to run a very
small number of process groups, but as the number of process groups grows beyond
60 or so, you will need either to adjust the ‘non-interactive’ value of the
SharedSection field in the registry, based on this information from Microsoft
(Windows desktop heap memory), or to increase the number of Oracle GoldenGate
homes and spread the total number of desired process groups across these homes.

Note:

For more information on modifying the Windows Desktop Heap memory, review the
following Oracle Knowledge Base document (Doc ID 2056225.1).

Parent topic: Considerations for Using Multiple Process Groups

17.1.1.3 MEMORY

The system must have sufficient swap space for each Oracle GoldenGate Extract
and Replicat process that will be running. To determine the required swap space:

 1. Start up one Extract or Replicat.
 2. Run GGSCI.
 3. View the report file and find the line PROCESS VM AVAIL FROM OS (min).
 4. Round up the value to the next full gigabyte if needed. For example, round
    up 1.76GB to 2 GB.
 5. Multiply that value by the number of Extract and Replicat processes that
     will be running. The result is the maximum amount of swap space that could
     be required (see the example below).
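
For example (the figures below are illustrative, not measured values): if the
report shows PROCESS VM AVAIL FROM OS (min): 1.76G, round the value up to 2 GB.
If 10 Extract and Replicat processes will run on the system, provision:

    2 GB per process * 10 processes = 20 GB of swap space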

See the CACHEMGR parameter in Reference for Oracle GoldenGate for more
information about how memory is managed.

Parent topic: Considerations for Using Multiple Process Groups

17.1.1.4 ISOLATING PROCESSING-INTENSIVE TABLES

You can use multiple process groups to support certain kinds of tables that tend
to interfere with normal processing and cause latency to build on the target.
For example:

 * Extract may need to perform a fetch from the database because of the data
   type of a column, because of parameter specifications, or to perform SQL
   procedures. When data must be fetched from the database, it affects the
   performance of Extract. You can get fetch statistics from the STATS EXTRACT
   command if you include the STATOPTIONS REPORTFETCH parameter in the Extract
   parameter file (see the sketch after this list). You can then isolate those
   tables into their own Extract groups, assuming that transactional integrity
   can be maintained.

 * In classic mode, the Replicat process can be a source of performance
   bottlenecks because it is a single-threaded process that applies operations
   one at a time by using regular SQL. Even with BATCHSQL enabled (see Reference
   for Oracle GoldenGate), Replicat may take longer to process tables that have
   large or long-running transactions, heavy volume, a very large number of
   columns that change, and LOB data. You can then isolate those tables into
   their own Replicat groups, assuming that transactional integrity can be
   maintained.
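
The following is a minimal sketch of gathering fetch statistics, assuming a
hypothetical Extract group named extfin; confirm the exact syntax in Reference
for Oracle GoldenGate:

    -- In the Extract parameter file: include fetch counts in statistics output
    STATOPTIONS REPORTFETCH

    -- In GGSCI: report statistics, including fetch statistics, for the group
    STATS EXTRACT extfin, REPORTFETCH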

Parent topic: Considerations for Using Multiple Process Groups

17.1.2 USING PARALLEL REPLICAT GROUPS ON A TARGET SYSTEM

This section contains instructions for creating a configuration that pairs one
Extract group with multiple Replicat groups. Although it is possible for
multiple Replicat processes to read a single trail (no more than three of them,
to avoid disk contention), it is recommended that you pair each Replicat with
its own trail and corresponding Extract process.

For detailed instructions on configuring change synchronization, see Configuring
Online Change Synchronization.

Topics:

 * To Create the Extract Group
   
 * To Create the Replicat Groups
   

Parent topic: Using Multiple Process Groups

17.1.2.1 TO CREATE THE EXTRACT GROUP

Note:

This configuration includes Extract data-pumps.

 1. On the source, use the ADD EXTRACT command to create a primary Extract
    group.
 2. On the source, use the ADD EXTTRAIL command to specify as many local trails
    as the number of Replicat groups that you will be creating. All trails must
    be associated with the primary Extract group.
 3. On the source, create a data-pump Extract group.
 4. On the source, use the ADD RMTTRAIL command to specify as many remote trails
    as the number of Replicat groups that you will be creating. All trails must
    be associated with the data-pump Extract group.
 5. On the source, use the EDIT PARAMS command to create Extract parameter
    files, one for the primary Extract and one for the data pump, that contain
    the parameters required for your database environment. When configuring
    Extract, do the following:
    
     * Divide the source tables among different TABLE parameters.
    
      * Link each TABLE statement to a different trail. This is done by placing
        the TABLE statements after the EXTTRAIL or RMTTRAIL parameter that
        specifies the trail you want those statements to be associated with, as
        shown in the sketch after these steps.
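
As an illustration of step 5, a primary Extract parameter file might divide its
TABLE statements between two local trails as follows (the group name, credential
alias, schema names, and trail paths are illustrative assumptions):

    EXTRACT ext1
    USERIDALIAS ggadmin
    EXTTRAIL ./dirdat/aa
    TABLE hr.*;
    EXTTRAIL ./dirdat/bb
    TABLE sales.*;

The data-pump parameter file follows the same pattern, with each group of TABLE
statements placed after the RMTTRAIL parameter for the trail that should carry
them.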

Parent topic: Using Parallel Replicat Groups on a Target System

17.1.2.2 TO CREATE THE REPLICAT GROUPS

 1. On the target, create a Replicat checkpoint table. For instructions, see
    Creating a Checkpoint Table. All Replicat groups can use the same checkpoint
    table.
 2. On the target, use the ADD REPLICAT command to create a Replicat group for
    each trail that you created. Use the EXTTRAIL argument of ADD REPLICAT to
    link the Replicat group to the appropriate trail.
 3. On the target, use the EDIT PARAMS command to create a Replicat parameter
    file for each Replicat group that contains the parameters required for your
    database environment. All MAP statements for a given Replicat group must
    specify the same objects that are contained in the trail that is linked to
    that group.
 4. In the Manager parameter file on the target system, use the PURGEOLDEXTRACTS
    parameter to control the purging of files from the trails.
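
A minimal GGSCI sketch of steps 2 and 4, assuming two remote trails and
hypothetical group, trail, and checkpoint-table names:

    ADD REPLICAT repa, EXTTRAIL ./dirdat/ra, CHECKPOINTTABLE ggadmin.ggs_checkpoint
    ADD REPLICAT repb, EXTTRAIL ./dirdat/rb, CHECKPOINTTABLE ggadmin.ggs_checkpoint

    -- In the Manager parameter file on the target (retention values are illustrative)
    PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3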

Parent topic: Using Parallel Replicat Groups on a Target System

17.1.3 USING MULTIPLE EXTRACT GROUPS WITH MULTIPLE REPLICAT GROUPS

Multiple Extract groups write to their own trails. Each trail is read by a
dedicated Replicat group.

For detailed instructions on configuring change synchronization, see Configuring
Online Change Synchronization.

 * To Create the Extract Groups
   
 * To Create the Replicat Groups
   

Parent topic: Using Multiple Process Groups

17.1.3.1 TO CREATE THE EXTRACT GROUPS

Note:

This configuration includes data pumps.

 1. On the source, use the ADD EXTRACT command to create the primary Extract
    groups.
 2. On the source, use the ADD EXTTRAIL command to specify a local trail for
    each of the Extract groups that you created.
 3. On the source, create a data-pump Extract group to read each local trail that
    you created.
 4. On the source, use the ADD RMTTRAIL command to specify a remote trail for
    each of the data-pumps that you created.
 5. On the source, use the EDIT PARAMS command to create an Extract parameter
    file for each primary Extract group and each data-pump Extract group.
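
A minimal GGSCI sketch of steps 1 through 4 for two parallel streams; the group
names, trail paths, and the TRANLOG and BEGIN options (which vary by database)
are illustrative assumptions:

    ADD EXTRACT ext1, TRANLOG, BEGIN NOW
    ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
    ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/aa
    ADD RMTTRAIL ./dirdat/ra, EXTRACT pump1

    ADD EXTRACT ext2, TRANLOG, BEGIN NOW
    ADD EXTTRAIL ./dirdat/bb, EXTRACT ext2
    ADD EXTRACT pump2, EXTTRAILSOURCE ./dirdat/bb
    ADD RMTTRAIL ./dirdat/rb, EXTRACT pump2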

Parent topic: Using Multiple Extract Groups with Multiple Replicat Groups

17.1.3.2 TO CREATE THE REPLICAT GROUPS

 1. On the target, create a Replicat checkpoint table. For instructions, see
    Creating a Checkpoint Table. All Replicat groups can use the same checkpoint
    table.
 2. On the target, use the ADD REPLICAT command to create a Replicat group for
    each trail. Use the EXTTRAIL argument of ADD REPLICAT to link the group to
    the trail.
 3. On the target, use the EDIT PARAMS command to create a Replicat parameter
    file for each Replicat group. All MAP statements for a given Replicat group
    must specify the same objects that are contained in the trail that is linked
    to the group.
 4. In the Manager parameter files on the source system and the target system,
    use the PURGEOLDEXTRACTS parameter to control the purging of files from the
    trails.

Parent topic: Using Multiple Extract Groups with Multiple Replicat Groups


17.2 SPLITTING LARGE TABLES INTO ROW RANGES ACROSS PROCESS GROUPS

You can use the @RANGE function to divide the rows of any table across two or
more Oracle GoldenGate processes. It can be used to increase the throughput of
large and heavily accessed tables and also can be used to divide data into sets
for distribution to different destinations. Specify each range in a FILTER
clause in a TABLE or MAP statement.

@RANGE is safe and scalable. It preserves data integrity by guaranteeing that
the same row will always be processed by the same process group.

It might be more efficient to use the primary Extract or a data pump to
calculate the ranges than to use Replicat. To calculate ranges, Replicat must
filter through the entire trail to find data that meets the range specification.
However, your business case should determine where this filtering is performed.
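
For example, two Replicat groups could each apply half of the rows of a large
table with MAP statements such as the following (the schema, table, and key
column names are illustrative assumptions); the same FILTER clause can be used
in TABLE statements to perform the split on the capture side instead:

    -- Parameter file of the first Replicat group
    MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (1, 2, order_id));

    -- Parameter file of the second Replicat group
    MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (2, 2, order_id));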

Figure 17-2 Dividing rows of a table between two Extract groups

Figure 17-3 Dividing rows of a table between two Replicat groups

Parent topic: Tuning the Performance of Oracle GoldenGate


17.3 CONFIGURING ORACLE GOLDENGATE TO USE THE NETWORK EFFICIENTLY

Inefficiencies in the transfer of data across the network can cause lag in the
Extract process and latency on the target. If not corrected, it can eventually
cause process failures.

When you first start a new Oracle GoldenGate configuration:

 1. Establish benchmarks for what you consider to be acceptable lag and
    throughput volume for Extract and for Replicat. Keep in mind that Extract
    will normally be faster than Replicat because of the kind of tasks that each
    one performs. Over time you will know whether the difference is normal or
    one that requires tuning or troubleshooting.

 2. Set a regular schedule to monitor those processes for lag and volume, as
    compared to the benchmarks. Look for lag that remains constant or is
    growing, as opposed to occasional spikes. Continuous, excess lag indicates a
    bottleneck somewhere in the Oracle GoldenGate configuration. It is a
    critical first indicator that Oracle GoldenGate needs tuning or that there
    is an error condition.

To view volume statistics, use the STATS EXTRACT or STATS REPLICAT command. To
view lag statistics, use the LAG EXTRACT or LAG REPLICAT command.

Topics:

 * Detecting a Network Bottleneck that is Affecting Oracle GoldenGate
   
 * Working Around Bandwidth Limitations by Using Data Pumps
   
 * Increasing the TCP/IP Packet Size
   

Parent topic: Tuning the Performance of Oracle GoldenGate

17.3.1 DETECTING A NETWORK BOTTLENECK THAT IS AFFECTING ORACLE GOLDENGATE

To detect a network bottleneck that is affecting the throughput of Oracle
GoldenGate, follow these steps.

 1. Issue the following command to view the ten most recent Extract checkpoints.
    If you are using a data-pump Extract on the source system, issue the command
    for the primary Extract and also for the data pump.
    
    INFO EXTRACT group, SHOWCH 10
    
    

 2. Look for the Write Checkpoint statistic. This is the place where Extract is
    writing to the trail.
    
    Write Checkpoint #1
    
    GGS Log Trail
    Current Checkpoint (current write position):
       Sequence #: 2
       RBA: 2142224
       Timestamp: 2011-01-09 14:16:50.567638
       Extract Trail: ./dirdat/eh
    
    

 3. For both the primary Extract and data pump:
    
     * Determine whether there are more than one or two checkpoints. There can
       be up to ten.
    
     * Find the Write Checkpoint n heading that has the highest increment number
       (for example, Write Checkpoint #8) and make a note of the Sequence, RBA,
       and Timestamp values. This is the most recent checkpoint.

 4. Refer to the information that you noted, and answer the following questions:
    
     * Is the primary Extract generating a series of checkpoints, or just the
       initial checkpoint?
    
     * If a data pump is in use, is it generating a series of checkpoints, or
       just one?

 5. Issue INFO EXTRACT for the primary and data pump Extract processes again.
    
     * Has the most recent write checkpoint increased? Look at the most recent
       Sequence, RBA, and Timestamp values to see if their values were
       incremented forward since the previous INFO EXTRACT command.

 6. Issue the following command to view the status of the Replicat process.
    
    SEND REPLICAT group, STATUS
    
    
    
     * The status indicates whether Replicat is delaying (waiting for data to
       process), processing data, or at the end of the trail (EOF).

There is a network bottleneck if the status of Replicat is either in delay mode
or at the end of the trail file and either of the following is true:

 * You are only using a primary Extract and its write checkpoint is not
   increasing or is increasing too slowly. Because this Extract process is
   responsible for sending data across the network, it will eventually run out
   of memory to contain the backlog of extracted data and abend.

 * You are using a data pump, and its write checkpoint is not increasing, but
   the write checkpoint of the primary Extract is increasing. In this case, the
   primary Extract can write to its local trail, but the data pump cannot write
   to the remote trail. The data pump will abend when it runs out of memory to
   contain the backlog of extracted data. The primary Extract will run until it
   reaches the last file in the trail sequence and will abend because it cannot
   make a checkpoint.

Note:

Even when there is a network outage, Replicat will process in a normal manner
until it applies all of the remaining data from the trail to the target.
Eventually, it will report that it reached the end of the trail file.

Parent topic: Configuring Oracle GoldenGate to Use the Network Efficiently

17.3.2 WORKING AROUND BANDWIDTH LIMITATIONS BY USING DATA PUMPS

Using parallel data pumps may enable you to work around bandwidth limitations
that are imposed on a per-process basis in the network configuration. You can
use parallel data pumps to send data to the same target system or to different
target systems. Data pumps also remove TCP/IP responsibilities from the primary
Extract, and their local trails provide fault tolerance.

Parent topic: Configuring Oracle GoldenGate to Use the Network Efficiently

17.3.3 INCREASING THE TCP/IP PACKET SIZE

Use the TCPBUFSIZE option of the RMTHOST parameter to control the size of the
TCP socket buffer that Extract maintains. By increasing the size of the buffer,
you can send larger packets to the target system. See Reference for Oracle
GoldenGate for more information.

Use the following steps as a guideline to determine the optimum buffer size for
your network.

 1. Use the ping command from the command shell to obtain the average round-trip
    time (RTT), as shown in the following example:
    
    C:\home\ggs>ping ggsoftware.com
    Pinging ggsoftware.com [192.168.116.171] with 32 bytes of data:
    Reply from 192.168.116.171: bytes=32 time=31ms TTL=56
    Reply from 192.168.116.171: bytes=32 time=61ms TTL=56
    Reply from 192.168.116.171: bytes=32 time=32ms TTL=56
    Reply from 192.168.116.171: bytes=32 time=34ms TTL=56
    Ping statistics for 192.168.116.171:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 31ms, Maximum = 61ms, Average = 39ms
    
    

 2. Multiply that value by the network bandwidth. For example, if the average
    RTT is 0.08 seconds and the bandwidth is 100 megabits per second, then the
    optimum buffer size is:
    
    0.08 second * 100 megabits per second = 8 megabits
    
    

 3. Divide the result by 8 to determine the number of bytes (8 bits to a byte).
    For example:
    
    8 megabits / 8 = 1 megabyte per second
    
    
    
    The required unit for TCPBUFSIZE is bytes, so you would set it to a value of
    1000000.

The maximum socket buffer size for non-Windows systems is usually limited by
default. Ask your system administrator to increase the default value on the
source and target systems so that Oracle GoldenGate can increase the buffer size
configured with TCPBUFSIZE.
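
Continuing the example, the computed value might be applied in the RMTHOST
parameter of the process that sends data over the network; the host name, port,
and the companion TCPFLUSHBYTES setting shown here are illustrative assumptions:

    RMTHOST tgthost, MGRPORT 7809, TCPBUFSIZE 1000000, TCPFLUSHBYTES 1000000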

Parent topic: Configuring Oracle GoldenGate to Use the Network Efficiently


17.4 ELIMINATING DISK I/O BOTTLENECKS

I/O activity can cause bottlenecks for both Extract and Replicat.

 * A regular Extract generates disk writes to a trail and disk reads from a data
   source.

 * A data pump and Replicat generate disk reads from a local trail.

 * Each process writes a recovery checkpoint to its checkpoint file on a regular
   schedule.

 * Improving I/O performance Within the System Configuration
   
 * Improving I/O Performance Within the Oracle GoldenGate Configuration
   

Parent topic: Tuning the Performance of Oracle GoldenGate

17.4.1 IMPROVING I/O PERFORMANCE WITHIN THE SYSTEM CONFIGURATION

If there are I/O waits on the disk subsystems that contain the trail files, put
the trails on the fastest disk controller possible.

Check the RAID configuration. Because Oracle GoldenGate writes data
sequentially, RAID 0+1 (striping and mirroring) is a better choice than RAID 5,
which uses checksums that slow down I/O and are not necessary for these types of
files.

Parent topic: Eliminating Disk I/O Bottlenecks

17.4.2 IMPROVING I/O PERFORMANCE WITHIN THE ORACLE GOLDENGATE CONFIGURATION

You can improve I/O performance by making configuration changes within Oracle
GoldenGate. Try increasing the values of the following parameters.

 * Use the CHECKPOINTSECS parameter to control how often Extract and Replicat
   make their routine checkpoints.
   
   Note:
   
   CHECKPOINTSECS is not valid for an integrated Replicat on an Oracle database
   system.

 * Use the GROUPTRANSOPS parameter to control the number of SQL operations that
   are contained in a Replicat transaction when operating in its normal mode.
   Increasing the number of operations in a Replicat transaction improves the
   performance of Oracle GoldenGate by reducing the number of transactions
   executed by Replicat, and by reducing I/O activity to the checkpoint file and
   the checkpoint table, if used. Replicat issues a checkpoint whenever it
   applies a transaction to the target, in addition to its scheduled
   checkpoints.
   
   Note:
   
   GROUPTRANSOPS is not valid for an integrated Replicat on an Oracle database
   system, unless the inbound server parameter parallelism is set to 1.

 * Use the EOFDELAY or EOFDELAYCSECS parameter to control how often Extract, a
   data pump, or Replicat checks for new data after it has reached the end of
   the current data in its data source. You can reduce the system I/O overhead
   of these reads by increasing the value of this parameter.

Note:

Increasing the values of these parameters improves performance, but it also
increases the amount of data that must be reprocessed if the process fails. This
has an effect on overall latency between source and target. Some testing will
help you determine the optimal balance between recovery and performance.
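
A sketch of how these parameters might appear in a Replicat parameter file; the
values are illustrative only, not recommendations, and should be validated
against your own recovery requirements:

    REPLICAT repa
    USERIDALIAS ggadmin
    CHECKPOINTSECS 20
    GROUPTRANSOPS 2000
    EOFDELAYCSECS 50
    MAP hr.*, TARGET hr.*;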

Parent topic: Eliminating Disk I/O Bottlenecks


17.5 MANAGING VIRTUAL MEMORY AND PAGING

Because Oracle GoldenGate replicates only committed transactions, it stores the
operations of each transaction in a managed virtual-memory pool known as a cache
until it receives either a commit or a rollback for that transaction. One global
cache operates as a shared resource of an Extract or Replicat process. The
Oracle GoldenGate cache manager takes advantage of the memory management
functions of the operating system to ensure that Oracle GoldenGate processes
work in a sustained and efficient manner. The CACHEMGR parameter controls the
amount of virtual memory and temporary disk space that is available for caching
uncommitted transaction data that is being processed by Oracle GoldenGate.

When a process starts, the cache manager checks the availability of resources
for virtual memory, as shown in the following example:

CACHEMGR virtual memory values (may have been adjusted)
CACHESIZE:                              32G
CACHEPAGEOUTSIZE (normal):               8M
PROCESS VM AVAIL FROM OS (min):      63.97G
CACHESIZEMAX (strict force to disk):    48G



If the current resources are not sufficient, a message like the following may be
returned:

2013-11-11 14:16:22  WARNING OGG-01842  CACHESIZE PER DYNAMIC DETERMINATION (32G) LESS THAN RECOMMENDED: 64G (64bit system)
vm found: 63.97G
Check swap space. Recommended swap/extract: 128G (64bit system).



If the system exhibits excessive paging and the performance of critical
processes is affected, you can reduce the CACHESIZE option of the CACHEMGR
parameter. You can also control the maximum amount of disk space that can be
allocated to the swap directory with the CACHEDIRECTORY option. For more
information about CACHEMGR, see Reference for Oracle GoldenGate.
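
As a minimal sketch (the size and directory values are illustrative assumptions,
not recommendations; see Reference for Oracle GoldenGate for the full syntax),
the CACHEMGR entry in an Extract or Replicat parameter file might look like
this:

-- Hypothetical settings: cap cached transaction memory and redirect spill space
CACHEMGR CACHESIZE 8GB, CACHEDIRECTORY /ogg/dirtmp 200GB

Choose a CACHESIZE that leaves headroom for other processes on the host, and
point CACHEDIRECTORY at a file system with enough free space for spilled
transaction data.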

Parent topic: Tuning the Performance of Oracle GoldenGate


17.6 OPTIMIZING DATA FILTERING AND CONVERSION

Heavy amounts of data filtering or data conversion add processing overhead. The
following are suggestions for minimizing the impact of this overhead on the
other processes on the system.

 * Avoid using the primary Extract to filter and convert data. Keep it dedicated
   to data capture. It will perform better and is less vulnerable to any process
   failures that result from those activities. The objective is to make certain
   the primary Extract process is running and keeping pace with the transaction
   volume.

 * Use Replicat or a data pump to perform filtering and conversion (an
   illustrative data-pump excerpt follows this list). Consider any of the
   following configurations:
   
   * Use a data pump on the source if the system can tolerate the overhead. This
     configuration works well when there is a high volume of data to be
     filtered, because it uses less network bandwidth. Only filtered data gets
     sent to the target, which also can help with security considerations.
   
   * Use a data pump on an intermediate system. This configuration keeps the
     source and target systems free of the overhead, but uses more network
     bandwidth because unfiltered data is sent from the source to the
     intermediate system.
   
   * Use a data pump or Replicat on the target if the system can tolerate the
     overhead, and if there is adequate network bandwidth for sending large
     amounts of unfiltered data.

 * If you have limited system resources, the least preferred option is to divide
   the filtering and conversion work between Extract and Replicat.
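
The following is a minimal sketch of a source-side data pump that filters rows
before they cross the network. The group name, remote host, trail path, and
WHERE condition are hypothetical and must be adapted to your configuration:

-- Hypothetical data-pump parameter file: filter rows on the source side
EXTRACT pmphr
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.emp, WHERE (department_id = 100);

Because only rows that satisfy the WHERE clause are written to the remote trail,
the unfiltered data never leaves the source system. Depending on your
environment, the pump may also need a database connection or source definitions
to evaluate the filter.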

Parent topic: Tuning the Performance of Oracle GoldenGate


17.7 TUNING REPLICAT TRANSACTIONS

Replicat uses regular SQL, so its performance depends on the performance of the
target database and on the type of SQL that is being applied (inserts versus
updates or deletes). However, you can take certain steps to maximize Replicat
efficiency.

 * Tuning Coordination Performance Against Barrier Transactions
   
 * Applying Similar SQL Statements in Arrays
   
 * Preventing Full Table Scans in the Absence of Keys
   
 * Splitting Large Transactions
   
 * Adjusting Open Cursors
   
 * Improving Update Speed
   
 * Set a Replicat Transaction Timeout
   

Parent topic: Tuning the Performance of Oracle GoldenGate

17.7.1 TUNING COORDINATION PERFORMANCE AGAINST BARRIER TRANSACTIONS

In a coordinated Replicat configuration, barrier transactions such as updates to
the primary key cause an increased number of commits to the database, and they
interrupt the benefit of the GROUPTRANSOPS feature of Replicat. When there is a
high number of barrier transactions in the overall workload of the coordinated
Replicat, using a high number of threads can actually degrade Replicat
performance.

To maintain high performance when large numbers of barrier transactions are
expected, you can do the following:

 * Reduce the number of active threads in the group. This reduces the overall
   number of commits that Replicat performs.

 * Move the tables that account for the majority of the barrier transactions,
   and any tables with which they have dependencies, to a separate coordinated
   Replicat group that has a small number of threads. Keep the tables that have
   minimal barrier transactions in the original Replicat group with the higher
   number of threads, so that parallel performance is maintained without
   interruption by barrier transactions.

 * (Oracle RAC) In a new Replicat configuration, you can increase the PCTFREE
   attribute of the Replicat checkpoint table. However, this must be done before
   Replicat is started for the first time. The recommended value of PCTFREE is
   90.
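
For example, assuming a hypothetical checkpoint table named
ggadmin.ggs_checkpoint, the attribute could be raised with a statement such as
the following, issued before Replicat is started for the first time:

-- Hypothetical checkpoint table name; run before the first Replicat start
ALTER TABLE ggadmin.ggs_checkpoint PCTFREE 90;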

Parent topic: Tuning Replicat Transactions

17.7.2 APPLYING SIMILAR SQL STATEMENTS IN ARRAYS

Use the BATCHSQL parameter to increase the performance of Replicat. BATCHSQL
causes Replicat to organize similar SQL statements into arrays and apply them at
an accelerated rate. In its normal mode, Replicat applies one SQL statement at a
time.

When Replicat is in BATCHSQL mode, smaller row changes will show a higher gain
in performance than larger row changes. At 100 bytes of data per row change,
BATCHSQL has been known to improve the performance of Replicat by up to 300
percent, but actual performance benefits will vary, depending on the mix of
operations. At around 5,000 bytes of data per row change, the benefits of using
BATCHSQL diminish.

The gathering of SQL statements into batches improves efficiency but also
consumes memory. To maintain optimum performance, use the following BATCHSQL
options:

BATCHESPERQUEUE 
BYTESPERQUEUE 
OPSPERBATCH 
OPSPERQUEUE 



As a benchmark for setting values, assume that a batch of 1,000 SQL statements
at 500 bytes each would require less than 10 megabytes of memory.
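
As a sketch only (the values are illustrative starting points, not
recommendations), these options are specified on the BATCHSQL parameter in the
Replicat parameter file:

-- Hypothetical BATCHSQL sizing; tune against your own workload and memory
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000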

You can use BATCHSQL with the BATCHTRANSOPS option to tune array sizing.
BATCHTRANSOPS controls the maximum number of batch operations that can be
grouped into a transaction before requiring a commit. The default for
non-integrated Replicat is 1000. The default for integrated Replicat is 50. If
there are many wait dependencies when using integrated Replicat, try reducing
the value of BATCHTRANSOPS. To determine the number of wait dependencies, view
the TOTAL_WAIT_DEPS column of the V$GG_APPLY_COORDINATOR database view in the
Oracle database.
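
For example, a query such as the following (the apply name is hypothetical)
reports the wait dependencies for an integrated Replicat:

-- Hypothetical apply name; run as a privileged user on the target database
SELECT apply_name, total_wait_deps
FROM   v$gg_apply_coordinator
WHERE  apply_name = 'OGG$REPA';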

See Reference for Oracle GoldenGate for additional usage considerations and
syntax.

Parent topic: Tuning Replicat Transactions

17.7.3 PREVENTING FULL TABLE SCANS IN THE ABSENCE OF KEYS

If a target table does not have a primary key, a unique key, or a unique index,
Replicat uses all of the columns to build its WHERE clause. This is,
essentially, a full table scan.

To make row selection more efficient, use a KEYCOLS clause in the TABLE and MAP
statements to identify one or more columns as unique. Replicat will use the
specified columns as a key. The following example shows a KEYCOLS clause in a
TABLE statement:

TABLE hr.emp, KEYCOLS (FIRST_NAME, LAST_NAME, DOB, ID_NO);
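
A corresponding Replicat MAP statement (the target schema shown is hypothetical)
would name the same columns, so that Replicat builds its WHERE clause from an
identical key:

MAP hr.emp, TARGET hr.emp, KEYCOLS (FIRST_NAME, LAST_NAME, DOB, ID_NO);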



For usage guidelines and syntax, see the TABLE and MAP parameters in Reference
for Oracle GoldenGate.

Parent topic: Tuning Replicat Transactions

17.7.4 SPLITTING LARGE TRANSACTIONS

If the target database cannot handle large transactions from the source
database, you can split them into a series of smaller ones by using the Replicat
parameter MAXTRANSOPS. See Reference for Oracle GoldenGate for more information.

Note:

MAXTRANSOPS is not valid for an integrated Replicat on an Oracle database
system.
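
As an illustration only (the value is an assumption), the following Replicat
entry would split large source transactions into target transactions of at most
10,000 operations:

-- Hypothetical limit; size it to what the target database can absorb
MAXTRANSOPS 10000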

Parent topic: Tuning Replicat Transactions

17.7.5 ADJUSTING OPEN CURSORS

The Replicat process maintains cursors for cached SQL statements and for SQLEXEC
operations. Without enough cursors, Replicat must age more statements. By
default, Replicat maintains as many cursors as allowed by the MAXSQLSTATEMENTS
parameter. You might find that the value of this parameter needs to be
increased. If so, you might also need to adjust the maximum number of open
cursors that are permitted by the database. See Reference for Oracle GoldenGate
for more information.
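
A hedged sketch of the two adjustments follows; both values are illustrative,
and the valid range of MAXSQLSTATEMENTS is documented in Reference for Oracle
GoldenGate:

-- Replicat parameter file: size of the cached-statement (cursor) pool
MAXSQLSTATEMENTS 250

-- Oracle database (example only): allow enough open cursors for Replicat sessions
ALTER SYSTEM SET open_cursors = 1000 SCOPE = BOTH;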

Parent topic: Tuning Replicat Transactions

17.7.6 IMPROVING UPDATE SPEED

Excessive block fragmentation causes Replicat to apply SQL statements at a
slower than normal speed. Reorganize heavily fragmented tables, and then stop
and start Replicat to register the new object ID.

Parent topic: Tuning Replicat Transactions

17.7.7 SET A REPLICAT TRANSACTION TIMEOUT

Use the TRANSACTIONTIMEOUT parameter to prevent an uncommitted Replicat target
transaction from holding locks on the target database and consuming its
resources unnecessarily. You can change the value of this parameter so that
Replicat can work within existing application timeouts and other database
requirements on the target.

TRANSACTIONTIMEOUT limits the amount of time that Replicat can hold a target
transaction open if it has not received the end-of-transaction record for the
last source transaction in that transaction. By default, Replicat groups
multiple source transactions into one target transaction to improve performance,
but it will not commit a partial source transaction and will wait indefinitely
for that last record. The Replicat parameter GROUPTRANSOPS controls the minimum
size of a grouped target transaction.
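
For example (the interval is an assumption, and the exact unit keywords are
described in Reference for Oracle GoldenGate), the following Replicat entry
limits an open target transaction to five minutes:

-- Hypothetical setting; verify the unit syntax in the reference guide
TRANSACTIONTIMEOUT 5 MINUTES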

The following events could last long enough to trigger TRANSACTIONTIMEOUT:

 * Network problems prevent trail data from being delivered to the target
   system.

 * Running out of disk space on any system, preventing trail data from being
   written.

 * Collector abends (a rare event).

 * Extract abends or is terminated in the middle of writing records for a
   transaction.

 * An Extract data pump abends or is terminated.

 * There is a source system failure, such as a power outage or system crash.

See Reference for Oracle GoldenGate for more information.

Parent topic: Tuning Replicat Transactions


17.8 USING HEALTHCHECK SCRIPTS TO MONITOR AND TROUBLESHOOT

The Oracle GoldenGate Healthcheck script provides database-side information for
Oracle Databases to support monitoring and troubleshooting.

The Healthcheck script gathers all replication-related configuration and
performance information from a database in a single run. The script contains
many queries about the database instance and the database-specific information
from Extract and Replicat. You can run the script periodically to obtain the
latest database-side performance information about replication.

The output is one of the key pieces of information that Oracle Support needs for
a qualitative analysis of the replication environment.

Topics:

 * Installing, Running, and Uninstalling Healthcheck Scripts
   The Healthcheck script is available for Oracle GoldenGate Classic and
   Microservices.
 * How to Deal with Healthcheck Information?
   
 * Components of Healthcheck Information
   The Healthcheck script generates an HTML file with JSON objects and HTML
   code, which you can view using a web browser.

Parent topic: Tuning the Performance of Oracle GoldenGate

17.8.1 INSTALLING, RUNNING, AND UNINSTALLING HEALTHCHECK SCRIPTS

The Healthcheck script is available for Oracle GoldenGate Classic and
Microservices.

The Healthcheck directory contains three files to install, run, and deinstall
the Healthcheck script. Once the PL/SQL package is installed with
ogghc_install.sql, you can run the ogghc_run.sql script as often as needed to
generate an output file. You can deinstall the Healthcheck script with the
ogghc_deinstall.sql script.

The Healthcheck script is located in:

 * $OGG_HOME/lib/sql/healthcheck for Microservices Architecture (MA)

 * $OGG_HOME/healthcheck for Classic Architecture (CA)

To gather complete information, Oracle recommends that you install and run the
Healthcheck script as the SYS user. However, you can also install and run the
script as an Oracle GoldenGate Administration User; in that case, some system
information is not available. The Healthcheck output indicates which information
requires SYS privileges.
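
A minimal sketch of a run from SQL*Plus follows, assuming the current directory
contains the scripts and that you connect as SYS; adapt the connection and paths
to your environment:

-- Hypothetical SQL*Plus session; the script names are those listed above
sqlplus / as sysdba

-- One-time installation of the Healthcheck PL/SQL package
SQL> @ogghc_install.sql

-- Generate the HTML output file (repeat as often as needed)
SQL> @ogghc_run.sql

-- Remove the Healthcheck package when it is no longer required
SQL> @ogghc_deinstall.sql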

Parent topic: Using Healthcheck Scripts to Monitor and Troubleshoot

17.8.2 HOW TO DEAL WITH HEALTHCHECK INFORMATION?

The output file of the Healthcheck script contains the instance name and a
timestamp. By default, information about Integrated Extract and Replicat is
gathered. However, you can also retrieve information from the legacy Oracle
GoldenGate schema or database profile; to do so, remove the argument of the
EXCLUDE_TAG parameter.

Depending on the amount of information being queried, the run time of the script
varies (on the order of minutes).

If a query takes too long to process in the Healthcheck script, you can
eliminate it from the script and run it in a parallel session to obtain its
output.

Parent topic: Using Healthcheck Scripts to Monitor and Troubleshoot

17.8.3 COMPONENTS OF HEALTHCHECK INFORMATION

The Healthcheck script generates an HTML file with JSON objects and HTML code,
which you can view using a web browser.

The output of the Healthcheck script contains the following sections:

 * Overview

 * Extract

 * Replicat

 * Table Statistics and Errors

 * Tools

 * Report Map (Legacy)

Each of these sections contains menus and sub-menus, depending on the type of
data available.

The following list describes each section and the data available in it, based on
the query used to generate the Healthcheck output.

Overview

The Overview section contains information about the following:

 * General Findings

 * Database, Extract and Replicat Summary

 * Capture Parameters

 * Apply Parameters

Database

Main Menu (The Main menu already contains the query information)

Database Objects

This sub-menu option displays information about the following:

 * Tables Not Supported By Oracle GoldenGate Integrated Capture

 * Instantiation SCNs for Apply Schema and Database (DDL)

Database Details

This sub-menu provides detailed database information related to various
connections and services, along with key Oracle GoldenGate parameters and the
manually modified database parameters. The second part shows basic information
about the components, software, and patch level of the database. The information
is distributed among the following sections.

 * Connection Information

 * Key init.ora parameters

 * Database dictionary and fixed table statistics

 * Software edition

 * Registry information including History

 * Replication bundled patch information

 * NLS Database parameters

 * Registered log files for capture

 * Current Database incarnation

 * Standby redo logs

 * GoldenGate Administrator User

Replication SQL Analysis

This section provides a complete log of Oracle GoldenGate related information
from the session at run time of the script and active session history. This
contains complete details about Waits, Events, IO, Contention and SQL.

Objects Instantiation

This sub-menu provides details about the schemas and table-level supplemental
logging for Oracle GoldenGate:

 * Schemas prepared for capture

 * Table-level supplemental log groups enabled for capture

Extract

Main Menu

Extract Details

The Extract details covered in this sub-menu are:

 * Capture Runtime Information

 * Capture Transaction Processing

 * Capture Processing Information

 * Logminer Session Statistics

 * Capture Rules & Rulesets

Extract Performance

This sub-menu displays information about the Extract performance depending on
the type of Extract being used. It displays the progress of the Extract which
includes the following details:

 * Capture Name - Name of the Extract

 * Client Name - Name of the client where the Extract is running

 * Client Status - Status of the client where the Extract is running

 * Processed Low SCN - Processed SCN value of the Extract

 * Oldest SCN - Oldest SCN value of the Extract

Extract Logminer

This sub-menu contains information mainly used for debugging issues.

Replicat

Main Menu

Replicat Performance

The Replicat performance provides the following information:

 * Apply Progress

 * GoldenGate Inbound Progress Table

 * Information about Apply Progress table

 * Apply Network Receiver (ANR)

 * Apply Reader

 * Apply Reader - Dequeue Information

 * Apply Coordinator

 * Apply Coordinator Watermarks

 * Open GoldenGate Apply Transactions

 * Open GoldenGate Apply Transactions - Details

 * Apply Server Transactions ordered by Server_id

 * Apply Server Statistics - Summary

 * Apply Server Statistics - Details

 * Apply Server Statistics - Auto Tuning

 * Apply Server Wait Events

 * Apply Server Session Events

 * Apply Reader Processes

 * Apply Coordinator Processes

 * Apply Server Processes

CDR

This sub-menu provides a detailed log of the Replicat error handlers.

Apply Handler

This sub-menu contains information about the Replicat name, DDL handler, and
precommit handler.

Error Management

Main Menu

Error Management Details

This sub-menu has details about the Oracle GoldenGate table statistics sorted by
table. It includes information such as server name, source table owner, source
table name, destination table owner, destination table name, total operations,
inserts, updates, deletes, insert/update/delete collisions, REPERROR discards,
REPERROR ignores, WAIT dependencies, and CDR related updates.

Tools

Main Menu

History

This sub-menu depends on the type of query you have run and displays subscriber,
Extract, and Replicat history.

Report Map

Main Menu (Legacy).

It provides information about all the queries, including details such as:

 * Run time of the query

 * Number of Returned Rows

 * Whether the query succeeded or failed

 * Whether the information is internal and visible only when the script is run
   as SYS

 * Disabled Queries

The hyperlinks direct you to the appropriate query.

Hints/Description

This sub-menu is a map of all the activities logged in the Healthcheck report.

Alerts

This sub-menu provides a log of the alerts from the general findings about the
database, Extract, and Replicat along with general system information.

Truncates

This sub-menu displays the Oracle GoldenGate related information from the
V$ACTIVE_SESSION_HISTORY and V$SQLAREA database views.

Note:

You can view only partial results for V$ACTIVE_SESSION_HISTORY, because the
original size of the query exceeds the maximum limit.

Config

This sub-menu allows you to add rules to the following sections that are
available on this page:

 * Column Rules: There can be any number of rules on a single table, and they
   are applied one after the other in the order they appear in the rules table.
   A rule can be an expression using the values of the columns in a row.

 * Menu Items Exclusions: This option is used to exclude any menu item from the
   Healthcheck output. Click Add to set the rule.

 * Group By: This section is used to group by keys and aggregates. You can go to
   any statistic and click G to add a group by sorting.

JS Errors

Main Menu. This page displays debugging information for this framework.

Note:

It is visible only in case of errors.

Parent topic: Using Healthcheck Scripts to Monitor and Troubleshoot
