- Continuent -

***NOTE***: Continuent Tungsten Replicator has moved to code.google.com

www.continuent.com

This document is written for Tungsten Replicator version 1.0.3.

Document issue 1.0

Table of Contents

1 Introducing Tungsten Replicator
1.1 Replication Concepts
1.1.1 What is Database Replication?
1.1.2 How Does Replication Work?
1.1.3 Built-in versus External Replication
1.1.4 Log- versus Trigger-Based Replication
1.2 Tungsten Replicator Architecture
1.3 MySQL Replication
2 Installation and Configuration for MySQL
2.1 Installation Prerequisites
2.2 Setting Up a User Account
2.3 Preparing the Database
2.4 Installing and Configuring Tungsten Replicator
2.5 Setting Up a Simple Master/Slave Configuration
2.6 MySQL Database Housekeeping
2.6.1 MySQL Server Parameter Settings
2.6.2 Statement versus Row Replication in MySQL 5.1
2.6.3 Handling SQL Mode Settings
2.6.4 Truncating Binlogs
2.6.5 Avoiding Binlog Corruption
3 Basic Principles of Operation
3.1 The Tungsten Replicator Process
3.1.1 Replication Roles
3.1.2 Replication States
3.1.3 Replication Catalogs
3.1.4 Static Properties and the replicator.properties File
3.1.5 Dynamic Properties
3.1.6 Transaction History Log
3.1.7 Extractors
3.1.8 Appliers
3.1.9 Filters
3.2 Backup and Restore
3.2.1 Overview of Backups and Backup Storage
3.2.2 Backup Configuration
3.2.3 Running Backup and Restore Commands
3.2.4 Storage Organization and Management
3.3 Master Failover
3.4 Provisioning New Slaves
3.5 Purging the Transactional History Log
3.6 Event Checksums
3.7 Consistency Checking
3.7.1 Overview
3.7.2 Invoking Consistency Checks
3.7.3 Configuration
3.8 Monitoring and Management APIs
3.8.1 JMX/MBean Interface Architecture
3.8.2
3.8.3 JMX Clients for Tungsten Replicator
4 Advanced Principles of Operation
4.1 Implementing Automated Failover
4.2 Fast Database Upgrade and Migration
4.3 Heterogeneous Replication
4.3.1 Replication between Different Database Types
4.3.2 Replication between Databases and Non-Databases
4.4 Rapid Read Scaling in Virtual Environments and Clouds
4.5 Real-Time Data Warehouse Loading
4.6 The Tip of the Iceberg
5 Troubleshooting and Tuning Tungsten Replicator
5.1 Error Handling
5.2 Out of Memory Errors
5.3 Tuning Replicator Performance
5.3.1 Apply-Side Event Caching
5.3.2 Apply-Side Block Commit
5.3.3 Master Connection Reset Period
5.4 Skipping a Failed SQL Update on the Slave
5.5 Running out of Disk Space
5.6 Handling Data Inconsistencies
5.7 Dealing with Failed Slaves
5.8 Dealing with a Failed Master
5.9 Database Failure
5.10 Re-initializing Tungsten Replicator State
6 Command Reference Guide
6.1 Running Tungsten Replicator from the Command Line Interface
6.2 Running Tungsten Replicator as an Operating System Service
6.3 Controlling a Running Tungsten Replicator Process
6.4 Replicator THL Utility
6.4.1 THL Utility Global Options
6.4.2 THL Utility Commands
7 Extending the Tungsten Replicator System
7.1 The ReplicatorPlugin Interface
7.2 Replicator Plug-In Life Cycle
7.3 Plug-In Setter Conventions
7.4 Logging from Plug-Ins
7.5 Advice on Writing Plug-Ins
A Tungsten Replicator Catalogs
A.1 history
A.2 consistency
B Tungsten Global Properties
C Replicator Plug-ins
C.1 Transaction History Log (THL) Storage
C.1.1 THL JDBC Storage Plug-In
C.2 Extractors
C.2.1 MySQL Extractor
C.3 Filters
C.3.1 Case Mapping Filter
C.3.2 Database Transform Filter
C.3.3 Logger Filter
C.3.4 Time-Delay Filter
C.4 Appliers
C.4.1 MySQL Applier
D Documentation Conventions

List of Figures

1.1 Replication Benefits
1.2 Master/Slave Replication
1.3 Replication Architecture
1.4 MySQL Replication Architecture
3.1 Replicator Commands and States
3.2 Replication Catalogs
3.3 Provisioning a New Slave from a Donor
3.4 High-Level Management Architecture
4.1 Automated Failover with Heartbeat
4.2 Fast Upgrade and Migration

List of Tables

2.1 Minimum MySQL Server Parameters for Tungsten Replicator
7.1 Replicator Plug-In Types
A.1 history Table
A.2 consistency Table
B.1 Global Properties
C.1 JDBC THL Storage Implementation Class
C.2 JDBC THL Properties
C.3 MySQL Extractor Implementation Class
C.4 MySQL Extractor Properties
C.5 Case Mapping Filter Implementation Class
C.6 Case Mapping Filter Properties
C.7 Database Transform Filter Implementation Class
C.8 Database Transform Filter Properties
C.9 Logger Filter Implementation Class
C.10 Logger Filter Properties
C.11 Time Delay Filter Implementation Class
C.12 Time Delay Filter Properties
C.13 MySQL Applier Implementation Class
C.14 MySQL Applier Properties
D.1 Typographic Conventions

Tungsten Replicator provides master/slave replication. It has a pluggable architecture and supports multiple database management systems. The following chapters provide more detailed information on Tungsten Replicator.

The basic concepts of the Tungsten Replicator system are explained in the following chapters.

Database replication is a highly flexible technology for copying updates automatically between databases. The idea is that if you make a change to one database, other database copies update automatically. Replication occurs at the database level and does not require any special actions from client applications.

Propagating updates automatically is a simple idea, but it helps solve a surprisingly large number of problems. The figure below summarizes example solutions, and each example is explained in more detail, clockwise, in the list that follows.


  1. Availability. Keeping multiple copies of data is one of the most effective ways to avoid database availability problems. If one database fails, you can switch to another local copy or even to a copy located on another site.

  2. Cross-site database operation. Applications like credit card processing use multiple open databases on different sites, so that there is always a database available for transactions. Replication can help transfer copies between distributed databases or send updates to a centrally located copy.

  3. Scaling. Replicated copies are live databases, so you can use them to distribute read traffic. For example, you can run backups or reports on a replica without affecting other copies.

  4. Upgrades. Replication allows users to upgrade a replica, which can then be switched over to become the master copy. This is a classic technique to minimize downtime as well as provide a convenient back-out in the event of problems.

  5. Heterogeneous database integration. It is quite common for data to be entered in one database type, such as Oracle, and used in another, such as MySQL. Replication can copy data between databases and perform transformations necessary to ensure proper conversion.

  6. Data warehouse loading. Replication applies updates in real-time, which is very useful as databases become too large to move using batch processes. Data warehouse loading is much easier with capabilities such as transforming data or copying updates to a central location.

  7. Geographic distribution. Replication allows users to place two or more clusters in geographically separated locations to protect against site failure or site unreachability.

It is not surprising that database replication is considered essential technology to build and operate a wide variety of business-critical applications. Tungsten Replicator is designed to solve the problems described above as well as many others.

Tungsten Replicator uses master/slave replication. In master/slave replication, updates are handled by one database server, known as the master, and propagated automatically to replicas, which are known as slaves. This is a very efficient way to make database copies and keep them up to date as they change.

Master/slave replication is based on a simple idea. Let's assume that two databases start with the same initial data, which we call a snapshot. We make changes on one database, recording them in order so that they can be replayed with exactly the same effect as the original changes. We call this a serialized order. If we replay the serialized order on the second database, we have master/slave replication.


Master/slave replication is popular for a number of reasons. First, databases can generate and reload snapshots relatively efficiently using backup tools. Second, databases not only serialize data very quickly but also write it into a file-based log that can be read by external processes. Master/slave replication is therefore reasonably tractable to implement, even though the effort to do it well is not small.

Master/slave replication has a number of benefits for users. It runs very quickly, places few limitations on user SQL, and works well over high latency network connections typical of wide area networks (WAN). Also, as Tungsten Replicator demonstrates, this type of replication does not require any changes to the database server itself, which means that it works with off-the-shelf databases.

Master/slave replication also has some disadvantages. First, the master is a single point of failure. Special procedures are necessary to handle this and keep systems available. Second, slaves tend to lag the master. This is due to the fact that masters can typically process updates faster than they can be replicated and applied to slaves.

Tungsten Replicator is designed to minimize drawbacks of the approach, for example by handling master failover correctly or providing mechanisms to help boost speed of updates on replicas. It also has features like data filtering and transformation, which make it better able to handle problems like heterogeneous data integration for which master/slave replication is uniquely suited.

Tungsten Replicator is a process that runs on every host in the cluster and implements replication as described in the previous sections. The figure below depicts the replication architecture:


The components in the figure are:

  • Master DBMS - The Database Management System (DBMS), which acts as the master for the replication system. The master role can change, and any DBMS can potentially be elected master for the replication.

  • Slave DBMS - The slave DBMS receives replication events from the master DBMS and applies them. There can be any number of slaves in the replication system. Slaves are also commonly known as replicas.

  • Replication Event Extractor - The replication event extractor extracts replication events from the master DBMS logs. Events are either SQL statements or rows from the replicated database.

  • Transaction History Log - The transaction history log provides persistent storage for replication events and communicates with other transaction history logs in the cluster.

  • Replication Event Applier - The replication event applier applies the replication events into the slave DBMS.

  • Node Manager - Node manager refers to the manager for Tungsten components running either on the slave or master node. Node manager connects to the Tungsten service manager at the upper level.

Tungsten Replicator architecture is very flexible and allows addition of new extractors and appliers. Addition of new databases is quite straightforward. It also allows users to implement creative new uses for replication, such as reading data from a database and applying it to an application rather than a database, or replicating from an application to a database. Extending Tungsten Replicator is discussed in Chapter 7, Extending the Tungsten Replicator System.

This chapter explains how to install and configure Tungsten Replicator for MySQL. It is assumed that each Tungsten Replicator instance runs on a separate database node.

Tungsten Replicator is written in Java and requires Sun JDK 1.5 or above. Before installing the replicator, you should obtain and install the JDK from http://java.sun.com. Download and install the full JDK.

When the JDK is correctly installed you should be able to run java -version from the command line and see output like the following:

Java version "1.5.0_12"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
Java HotSpot(TM) Client VM (build 1.5.0_12-b04, mixed mode, sharing)
[Warning]Warning

Red Hat Linux and distributions derived from it may include GNU Java, which has very different command-line flags and behaves differently from Sun JDK versions. This version of Java can cause serious confusion if it somehow gets in the execution path. We recommend you remove it unless there is a compelling reason for it to be present on your hosts.

To install Tungsten Replicator, proceed as follows:

  1. Log in with the continuent account used to run Tungsten Replicator. This will ensure that all files are owned by the correct login. You must also use this account to start the replicator.

    Copy the distribution archive to the database nodes and unpack at the location of your choice. In the following code examples, we will use this location as the default directory.

    On Linux, Solaris, MacOS X, and other Unix variants we recommend installing in directory /opt/continuent. On Windows, use for example the C:\Program Files directory.

    [Note]Note

    If you use Windows and cannot unpack the .zip distribution archive, try installing another file compression program, such as 7-zip. You can also use the jar program distributed with the Java JDK.

  2. Configure Tungsten Replicator instances.

    1. In the unpacked distribution, cd to the conf directory and copy file replicator.properties.mysql to replicator.properties. Here are sample commands for Linux and Solaris.

      cd conf
      cp replicator.properties.mysql replicator.properties
      chmod 700 replicator.properties
                    

      [Warning]Warning

      The replicator.properties file contains passwords. To ensure security it should be owned by the continuent account and have restricted permissions.

    2. Edit replicator.properties and set the properties required by the replicator.

      The following sample shows standard parameters for the replicator.properties file.

      #################################
      # REPLICATOR.PROPERTIES.MYSQL   #
      #################################
      #
      # This file contains properties for MySQL replication.
      
      # Replicator role.  Uncomment one of the choices of master or slave.  
      # There is no default for this value--it must be set or the replicator 
      # will not go online.  
      replicator.role=slave
      #replicator.role=master
      
      # Replicator auto-enable.  If true, replicator automatically goes online 
      # at start-up time. 
      replicator.auto_enable=false
      
      # Source ID. This required parameter is used to identify replication
      # event source.  It must be unique for each replicator node.
      replicator.source_id=sol10b
      
      # Schema to store Replicator catalog tables. 
      replicator.schema=tungsten
      
      # Event checksum algorithm.  Possible values are 'SHA' or 'MD5' or 
      # empty for no checksums
      replicator.event.checksum=md5
      
      # Extractor selection.  Value must be a logical name.  
      replicator.extractor=mysql
      
      # Pre-storage filter selection.  Value must be one or more comma-separated
      # logical filter names. 
      #replicator.prefilter=logger
      
      # Post-storage filter selection.  Value must be one or more comma-separated
      # logical filter names. 
      #replicator.postfilter=logger
      
      # Applier selection.  Value must be a logical name whose parameters are 
      # defined below. 
      replicator.applier=mysql
      
      # Information regarding the RMI port to use
      # for advertising JMX services
      replicator.rmi_port=10000
      
      #################################
      # TRANSACTION HISTORY LOG (THL) #
      #################################
      
      # Address of this replicator's URI.  
      replicator.thl.uri=thl://0.0.0.0/
      
      # Master URI address.  For a master this points to the same address as 
      # replicator.thl.uri.  For a slave this points to the master replicator's
      # value.  
      replicator.thl.remote_uri=thl://centos5a/
      
      # Parameters for configuring THL.  These must be fully filled out.  If you 
      # use a database name other than 'tungsten' you must change property
      # replicator.schema to match. (JIRA TREP-157). 
      replicator.thl.storage=com.continuent.tungsten.replicator.thl.JdbcTHLStorage
      replicator.thl.url=jdbc:mysql://localhost/tungsten
      replicator.thl.user=tungsten
      replicator.thl.password=secret
      
      ##############
      # EXTRACTORS #
      ##############
      
      # MySQL extractor properties.  
      replicator.extractor.mysql=com.continuent.tungsten.replicator.extractor.mysql.MySQLExtractor
      replicator.extractor.mysql.binlog_dir=/opt/mysql/data
      replicator.extractor.mysql.binlog_file_pattern=mysql-bin
      replicator.extractor.mysql.host=localhost
      replicator.extractor.mysql.user=tungsten
      replicator.extractor.mysql.password=secret
      
      ###########
      # FILTERS # 
      ###########
      
      # Dummy filter.  A filter that does nothing. 
      replicator.filter.dummy=com.continuent.tungsten.replicator.filter.DummyFilter
      
      # Logging filter.  Logs each event to current system log. 
      replicator.filter.logger=com.continuent.tungsten.replicator.filter.LoggingFilter
      
      # Database transform filter.  Transforms database names that match the 
      # from_regex are transformed into the to_regex.  
      replicator.filter.dbtransform=com.continuent.tungsten.replicator.filter.DatabaseTransformFilter
      replicator.filter.dbtransform.from_regex=foo
      replicator.filter.dbtransform.to_regex=bar
      
      # Transforms database, table and column names into upper or lower case. In case
      # of statement replication generally it transforms everything except quoted
      # string values.
      replicator.filter.casetransform=com.continuent.tungsten.replicator.filter.CaseMappingFilter
      replicator.filter.casetransform.to_upper_case=true
      
      # JavaScript call out filter. Calls script's prepare(), filter(event) and
      # release() functions. Define multiple filters with different names in case you
      # need to call more than one script.
      replicator.filter.javascript=com.continuent.tungsten.replicator.filter.JavaScriptFilter
      replicator.filter.javascript.script=../samples/extensions/javascript/filter.js
      replicator.filter.javascript.sample_custom_property="Sample value"
      
      # Time delay filter.  Should only be used on slaves, as it delays storage
      # of new events on the master.  The time delay is in seconds. 
      replicator.filter.delay=com.continuent.tungsten.replicator.filter.TimeDelayFilter
      replicator.filter.delay.delay=300
      
      ############
      # APPLIERS #
      ############
      
      # MySQL applier properties.  NOTE:  url_options are extra JDBC options that make
      # the applier behave like a regular client library connection.  For more information
      # see the replicator or MySQL Connector/J docs. 
      #
      replicator.applier.mysql=com.continuent.tungsten.replicator.applier.MySQLApplier
      replicator.applier.mysql.host=localhost
      replicator.applier.mysql.port=3306
      replicator.applier.mysql.url_options=?jdbcCompliantTruncation=false\
      &zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false\
      &allowMultiQueries=true&yearIsDateType=false 
      replicator.applier.mysql.user=tungsten
      replicator.applier.mysql.password=secret
      
      # How to react on consistency check failure. Possible values are 'stop' or 'warn'. 
      replicator.applier.consistency_policy=stop
      
      # Should consistency check be sensitive to column names and/or types? Settings
      # on a slave must be identical to master's. Values are 'true' or 'false'. 
      
      replicator.applier.consistency_column_names=true
      replicator.applier.consistency_column_types=true
                  

If you have followed these instructions so far and have MySQL installed, you should not need to make any other changes.

This chapter explains how to start Tungsten Replicator and set up a simple master/slave configuration.

[Important]Important

In Linux, Solaris, and Mac OS X, the login used to run Tungsten Replicator must have permissions to read MySQL binlog files. Add the Tungsten Replicator login to the mysql group, which will allow it to read but not write to the logs.
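
On Linux, for example, you could add the continuent account (the account name may differ in your installation) to the mysql group as follows. This is a sketch only and requires root privileges:

usermod -a -G mysql continuent

The account must log out and back in for the new group membership to take effect.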

Tungsten Replicator is run and configured by using shell scripts residing in the bin directory.

In Linux, Solaris, and Mac OS X, use the scripts below:

In Windows, use the scripts below:

To start Tungsten Replicator in Linux, Solaris, or Mac OS X, proceed as follows:

  1. Dump the master database and upload it to all slaves in the cluster. For example, issue the following command on the master:

    mysqldump -uuser -ppassword -hmaster_host --all-databases > mydump.sql

    On the slave:

    mysql -uuser -ppassword -hslave_host < mydump.sql

    [Note]Note

    On Debian-based distributions, you may have to copy the password value in /etc/mysql/debian.cnf from the master to the slave after taking a dump. Otherwise MySQL scripts will not work.

    [Tip]Tip

    Slaves may also be provisioned using the built-in backup capability of Tungsten Replicator. See Section 3.4, “Provisioning New Slaves” for more information.

  2. On master and all slaves, start the Tungsten Replicator process:

    trepstart (or) trepsvc start

  3. On the master and all slaves, bring the replicator online:

    trepctl online

To start Tungsten Replicator in Windows, proceed as follows:

[Note]Note

If you set the replicator.auto_enable property to true, the replicator will go online automatically without your needing to enter trepctl online. This is very handy when running the replicator as a service using trepsvc.

This is all it takes to start Tungsten Replicator master and slaves. You should now have your master and slave Tungsten Replicators running and you can check the replication by making some data changes in the master database and verifying that the changes are reflected in the slave databases.

The use of the trepctl/trepctl.bat command is documented in Chapter 6, Command Reference Guide. See also Section 6.2, “Running Tungsten Replicator as an Operating System Service”.

Tungsten Replicator has certain requirements specific to MySQL that must be met for replication to function correctly. This section provides an overview of administrative settings and other information that will ensure smooth operation. For more information please refer to MySQL server documentation at http://www.mysql.com.

Tungsten Replicator has minimal dependencies on MySQL server parameters but does have a few required standard settings. Server parameters are normally set in my.cnf. The location of this file varies by OS platform and distribution; on Linux systems it is commonly located in /etc/my.cnf. Here is an example of the minimum my.cnf parameter settings for Tungsten Replicator.

[mysqld]
# Master replication settings.
log-bin=mysql-bin
server-id=1
max_allowed_packet=16m
      

The following table summarizes recommended usage of the foregoing parameters.


Additional optional parameters are discussed in the succeeding sections. For all other parameters please follow standard MySQL recommendations.
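
To confirm that the required settings are in effect on a running server, you can query each server variable with the standard mysql client. This is a minimal sketch; adjust the account and password to your installation:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'log_bin'"
mysql -uroot -p -e "SHOW VARIABLES LIKE 'server_id'"
mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"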

The basic operating principles of the Tungsten Replicator are explained in this chapter. Understanding these principles will allow you to set up and manage individual replicator instances.

A Tungsten Replicator instance is an operating system process that manages extracting or applying SQL updates. The Tungsten Replicator instance can function in either of two roles: as a master or as a slave. The master extracts SQL changes from a log and stores them for distribution as replication events. The slave receives replication events and applies them to a target, which is usually but not always a SQL database.

At any given time, the Tungsten Replicator instance usually is either a master or a slave. To perform a particular role the replicator must be correctly configured and must be in the correct state.

To configure a replicator to function as a master, the replicator.thl.remote_uri property in the replicator.properties file should be configured to point to the localhost and provide a port on which it listens as shown below.

replicator.thl.uri=thl://0.0.0.0/
replicator.thl.remote_uri=thl://localhost:20001/

To configure a replicator to function as a slave, the replicator.thl.remote_uri property should be configured to point to the host and port of the master transaction history log (THL). If the master in the previous example were running on host centos5d, then each slave would need to be configured as shown below.

replicator.thl.uri=thl://0.0.0.0/
replicator.thl.remote_uri=thl://centos5d:20001/

These settings must be correctly defined for master and slave to be able to communicate properly.

[Note]Note

Slaves can also function as masters to other slaves. For example, you can set up a configuration in which a slave points to another slave which in turn points to the real master. The slave in the middle is known as a relay slave.

Tungsten Replicator states determine whether the instance functions as a master, slave, or neither. Tungsten Replicator states follow the simple state model shown below. The boxes are states while the arrows are transitions. Transitions are triggered by issuing commands through the trepctl program.


The replicator states and commands to change them are as follows.

  • START. Tungsten Replicator processes automatically enter this state on start-up. They remain in this state until a user issues a configure command.

  • OFFLINE. In this state, the Tungsten Replicator is idle and neither applies nor extracts replication events. The Tungsten Replicator enters this state after successful configuration or following a successful offline command. Users can issue an online command to start replication. The Tungsten Replicator may also enter this state automatically following a fatal replication error.

    OFFLINE has four sub-states, which are listed below.

    • OFFLINE:BACKUP. This state indicates that Tungsten Replicator is performing a database backup.
    • OFFLINE:ERROR. This state indicates that Tungsten Replicator is off-line following an error of some kind. The error message that caused this is easily accessible and is preserved until a successful administrative command causes it to exit the error state.
    • OFFLINE:NORMAL. This state indicates that Tungsten Replicator is off-line following a normal administrative operation or following a normal start-up.
    • OFFLINE:RESTORE. This state indicates that Tungsten Replicator is performing a database restore.
  • SYNCHRONIZING. The Tungsten Replicator enters this state whenever it is catching up with a master. This normally occurs when a user issues an online command. It also occurs in response to internal events such as when a slave loses its connection to the master. The Tungsten Replicator transitions automatically to the ONLINE state when it detects that it is synchronized with the master and ready to begin applying events.

  • ONLINE. This state has two sub-states, which are listed below.

    • ONLINE:SLAVE. In this state Tungsten Replicator is synchronized with the master defined in the replicator.thl.remote_uri property and actively applying events or ready to apply events. Users can promote the Tungsten Replicator to a master only by changing the replicator.role property. This must be done while the replicator is in the OFFLINE state.

    • ONLINE:MASTER. In this state Tungsten Replicator is extracting events using a properly configured and functioning extractor and delivering them to slaves through the port defined in replicator.thl.remote_uri property. Tungsten Replicator enters this state only if the replicator.role property is set to master. Tungsten Replicator can go offline from this state either in response to a shutdown command or due to an internal error.

[Note]Note

The online command sends the replicator into the SYNCHRONIZING state before bringing it online initially as a slave. It may take some time before this state change is fully complete. Most other state changes are complete when the command returns. You can use the trepctl wait command to wait for the replicator to go fully online.

Due to the asynchronous nature of the replication, there is some delay before changes made on the master node will be reflected in the slave nodes. The latency depends on hardware, communication network, and the SQL load profile and should be measured separately for each installation.

Replication catalogs are database tables that the Tungsten Replicator uses to keep track of replication events and manage the replication process. Catalogs are normally stored in the same database server for which the Tungsten Replicator is handling replication, as shown in the following diagram.


Replication catalogs include tables for storing replicated events, consistency checks, and any other information required by the Tungsten Replicator. The Tungsten Replicator creates them automatically at start-up time.

The catalog database is the database that contains the catalog tables. Catalog tables may be stored in any database. By convention they are stored in the tungsten database. The catalog database should be different from the database used to store application data and should not be replicated. Replicating the catalog database will quickly result in corrupt data on slaves as well as replication failures.

The catalog database contents must be coordinated with the contents of tables that are currently being replicated. For example, the THL tables described in Section 3.1.6, “Transaction History Log” must match application data or replication will either miss SQL updates or try to apply them twice. When transferring a snapshot to provision a new Tungsten Replicator instance, the catalog tables are normally included.

The Tungsten Replicator catalog tables are described fully in Appendix A, Tungsten Replicator Catalogs.

[Tip]Tip

Create the catalog database beforehand when first setting up the master. Depending on which DBMS you are using, a missing database may result in a replicator failure on start-up. Once the database is correctly set up it can be replicated to slaves along with application databases.

The replicator.properties file contains static configuration information for Tungsten Replicator. By static we mean that the properties are read once and do not change again until the file is reread, which happens when the replicator restarts or receives a configure command while in the OFFLINE state. The replicator.properties file is located by default in directory conf and must be properly configured for the replicator process to run.

Configuration parameters have a well-defined form that allows for global parameters that apply to the replicator as a whole as well as parameters that are specific to individual plug-ins. The naming pattern is illustrated below.
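
As a hedged illustration drawn from the sample properties file shown earlier (the logical names are examples only), a global parameter uses a simple replicator.<name> key, while plug-in parameters first bind a logical name to an implementation class and then qualify further settings with that logical name:

# Global parameter: applies to the replicator as a whole.
replicator.source_id=sol10b

# Plug-in parameters: the logical name 'delay' selects an implementation class,
# and additional settings are qualified with the same logical name.
replicator.filter.delay=com.continuent.tungsten.replicator.filter.TimeDelayFilter
replicator.filter.delay.delay=300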

Certain Tungsten Replicator properties may be set dynamically using JMX or through the trepctl program. Dynamic properties are remembered across restarts of the replicator process. Dynamic properties are designed to allow remote configuration changes necessary to manage Tungsten Replicator and perform critical operations like failover without physical access to the host on which the replicator process is executing.

Tungsten Replicator stores dynamic properties in the file conf/dynamic.properties. When starting or processing a configure command, the replicator process first reads static properties in replicator.properties followed by dynamic properties. In this way, dynamic values override the original static values.

Dynamic properties are set using the trepctl set program, as illustrated in the following example. Note that the Tungsten Replicator must be in the OFFLINE state for changes to be accepted.

trepctl offline
trepctl set -name replicator.thl.remote_uri -value thl://guppy/
trepctl online

Dynamic properties are cleared using the trepctl clear command. This command resets all properties to their static values and deletes the dynamic.properties file. Like the set command, it may only be run when the replicator process is in the OFFLINE state.

trepctl offline
trepctl clear
trepctl online

There are two further ways to clear dynamic properties. You may remove the dynamic.properties file while the replicator is stopped. You may also start the replicator using the -clear option. Both cause Tungsten Replicator to "forget" current dynamic property values and start with the original static values instead.

The trepctl show command shows the current values of all dynamically settable parameters. You may issue this command in any state.

The Transaction History Log or THL is a standard component of every Tungsten Replicator instance. The THL maintains a list of replication events in serial order, which means that if you apply them to a slave database it will result in the same contents as the master.

THL contents are stored in catalog tables. The catalog location is defined by the replicator.thl properties in replicator.properties. These properties are illustrated below.

# Parameters for configuring THL.  These must be fully filled out.
replicator.thl.storage=com.continuent.tungsten.replicator.thl.JdbcTHLStorage
replicator.thl.url=jdbc:mysql://localhost/tungsten
replicator.thl.user=tungsten
replicator.thl.password=secret

The THL storage plug-in is generic and works the same way for all supported database types. The storage implementation class is com.continuent.tungsten.replicator.thl.JdbcTHLStorage as seen in the preceding example. The THL plug-in properties are described fully in Section C.1, “Transaction History Log (THL) Storage”.

Replication events are stored in the history table. Each event is characterized by a sequence number (column seqno), which increases with each new transaction that is added to the THL. Sequence number values are identical across all THL copies, which means that if a master and slave have the same maximum sequence number then the slave's THL is fully up to date with the master.
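
Because sequence numbers are comparable across nodes, you can gauge how far a slave's THL lags by comparing maximum sequence numbers. The following is a hedged sketch that assumes the catalog schema is named tungsten and uses the THL credentials from the sample configuration:

# Run on the master and on each slave; if the values match, the slave's THL
# is fully up to date with the master.
mysql -utungsten -psecret -e "SELECT MAX(seqno) FROM tungsten.history"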

[Warning]Warning

The database or schema name for the THL and other catalog tables is given by the replicator.schema property. This name must match the name of the database in the replicator.thl.url property. This dependency will be removed in a future release.

Each Tungsten Replicator instance must have a configured applier to operate as a slave. The applier is not used when operating as a master but must be configured anyway, because all replicators first start in the slave role.

The applier is selected using the replicator.applier property. This property must supply a logical name as shown in the following example.

# Applier selection.  Value must be a logical name. 
replicator.applier=mysql

Class name and properties for the applier are configured according to the rules for plug-in configuration provided in Section 3.1.4, “Static Properties and the replicator.properties File”. The following example shows property definition for the MySQL applier. As with all plug-ins, unused properties may be omitted. They will assume default values.

# MySQL applier properties.
replicator.applier.mysql= \
com.continuent.tungsten.replicator.applier.MySQLApplier
replicator.applier.mysql.host=localhost
replicator.applier.mysql.port=4306
replicator.applier.mysql.user=tungsten
replicator.applier.mysql.password=secret

Applier properties vary by applier type. Appliers are documented fully in Section C.4, “Appliers”.

Each Tungsten Replicator instance may have 0 or more filters. Filters can drop or transform SQL events, which is very handy for a wide variety of replication use cases.

Filters can be configured and used in two different ways. Pre-storage filters execute on the master after events are extracted and before they are stored in the THL. Post-storage filters execute on the slave after events are retrieved from the THL and before they are applied to a target database. Filter implementation is identical regardless of the role in which they function. The only difference is in configuration.

Filters are selected using properties that correspond to pre-storage or post-storage roles. Each property has a list of 0 or more logical names. Names can be separated by commas or whitespace characters (e.g., a blank space or tab). The following example demonstrates filter configuration.

# Pre-storage filter selection.  Value must be one or more comma-separated
# logical filter names.
replicator.prefilter=logger

# Post-storage filter selection.  Value must be one or more comma-separated
# logical filter names.
replicator.postfilter=logger,dbtransform

Class name and properties for each filter are set according to the rules for plug-in configuration provided in Section 3.1.4, “Static Properties and the replicator.properties File”. Note that all filters use the same replicator.filter prefix regardless of their actual role.

# Logging filter.  
replicator.filter.logger= \
com.continuent.tungsten.replicator.filter.LoggingFilter

# Database transform filter.  
replicator.filter.dbtransform= \
com.continuent.tungsten.replicator.filter.DatabaseTransformFilter
replicator.filter.dbtransform.from_regex=foo
replicator.filter.dbtransform.to_regex=bar

Filter properties vary by filter type. Filters are documented fully in Section C.3, “Filters”.

Backup configuration is controlled by replicator.properties. Backup and storage agent plug-ins follow the same general configuration pattern described in Section 3.1.4, “Static Properties and the replicator.properties File”. Here are the main steps to observe when setting configuration values; a hedged configuration sketch follows the list.

  1. List one or more backup agents using the replicator.backup.agents property and configure the corresponding agent properties. Backup agent settings are specific to the type of database and backup/restore mechanism.

  2. Select a default backup agent using the replicator.backup.default property. This agent is used if you do not specify one on the backup command.

  3. In like fashion select one or more storage agents using the replicator.storage.agents property, configure properties, and select a default using the replicator.storage.default property. All storage agents have a retention property. This determines the maximum number of backups retained until old backups are deleted.
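
The following sketch illustrates these steps. The logical names mysqldump and fs match the names used in the command examples later in this section; agent-specific properties, including implementation classes, are omitted, and the exact retention property path is an assumption based on the general plug-in naming convention, so check your distribution's sample properties file:

# Backup agents and the default agent used when none is named on the command line.
replicator.backup.agents=mysqldump
replicator.backup.default=mysqldump

# Storage agents and the default storage agent.  The retention setting limits how
# many backups are kept before old ones are deleted.
replicator.storage.agents=fs
replicator.storage.default=fs
replicator.storage.fs.retention=3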

[Tip]Tip

When testing replication (for example before deploying into production) it is often useful to define two storage agents that use different locations. The default agent is used for normal backup and restore. The other agent can hold a base backup to restore system state quickly at the beginning of tests.

[Note]Note

Backup properties are not dynamic. If you make changes, you must reread replicator.properties, either by issuing trepctl configure or by restarting the process.

Before running a backup, Tungsten Replicator must be in the OFFLINE state. Depending on the backup agent used, the database may also need to be fully quiesced with no active transactions.

The trepctl command has options to run backup and restore tasks. The following example runs a backup using the default backup agent, stores it using the default storage agent, and prints the URI when the backup completes.

$ trepctl backup
Backup completed successfully; \
URI=storage://file-system/store-0000000013.properties
State: OFFLINE:NORMAL
      

The following example selects specific backup and storage agents and waits only 15 minutes before returning. If the backup is not done before the timeout expires the URI is not printed. Instead, you must check the log to find the backup URI.

$ trepctl backup -backup mysqldump -storage fs -limit 900
Backup is pending; check log for status
State: OFFLINE:BACKUP
      

To restore data use the trepctl restore command. As with backups, Tungsten Replicator must be in the OFFLINE state to run a restore command.

The following example runs a restore using the latest backup stored with the default storage agent. Tungsten Replicator automatically determines the backup agent to use by reading the backup metadata and calling the correct agent.

$ trepctl restore
Restore completed successfully
State: OFFLINE:NORMAL
      

Similarly, the following example restores a specific backup by specifying its URI. We also wait up to 20 minutes before returning. As with backups, if the restore is not done before the timeout expires you will not know if the restore succeeded. Instead, you must check the replicator log.

$ trepctl restore -uri storage://file-system/store-0000000013.properties -limit 1200
Restore is pending; check log for status
State: OFFLINE:RESTORE
      
[Tip]Tip

Both backup and restore operations return Tungsten Replicator to the OFFLINE:NORMAL state if they succeed. In the event of an error Tungsten Replicator goes into the OFFLINE:ERROR state, which means that failures are easy to detect.

Master failover is the process of switching from an existing master to a new one. Failover is not automatic. Instead, users must execute a series of commands to configure a new master and point the slaves at it. The following procedure describes how to perform a planned failover; a hedged command sketch appears after the notes below.

[Note]Note

Commands shown in this section correspond to Unix conventions. Windows commands are analogous but use the Windows scripts.

[Warning]Warning

If you are switching masters due to a failover, do not enable the old master as a slave, as this can lead to data inconsistency errors due to transactions on the master that were not replicated to any slave. See Section 5.8, “Dealing with a Failed Master” for more information. A future release of Tungsten Replicator will provide mechanisms to identify automatically when a master can safely be reused as a slave.
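
The following is a sketch only of a planned failover. It assumes that the replicator.role and replicator.thl.remote_uri properties can be set dynamically as described in Section 3.1.5, “Dynamic Properties”; the host names oldmaster, newmaster, and slave1 are placeholders:

# Take the current master offline so that no new updates are extracted, and
# allow the candidate slave to finish applying any remaining events.
trepctl -host oldmaster offline

# Promote the chosen slave.  Its role must be changed while it is OFFLINE.
trepctl -host newmaster offline
trepctl -host newmaster set -name replicator.role -value master
trepctl -host newmaster online

# Repoint each remaining slave at the new master's THL and bring it back online.
trepctl -host slave1 offline
trepctl -host slave1 set -name replicator.thl.remote_uri -value thl://newmaster/
trepctl -host slave1 online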

You can add new slaves to a replication configuration at any time. The provisioning procedure has the following steps.

The first time you provision a slave it will be necessary to stop the master completely in order to create a consistent data dump. However, once you have at least one slave available you can use a slave for provisioning instead. We call this kind of slave a donor slave. Donor slaves eliminate the need to stop the master.

The following diagram illustrates use of a donor slave to provision another slave without stopping the master.


Synchronizing data requires database-specific dump and load commands. The following procedure is generic and assumes that you have backup properly configured as described in Section 3.2, “Backup and Restore”. You can also substitute a backup mechanism of your own. The host names are donor and recipient respectively.

  1. Put the donor slave into the OFFLINE state, back it up, and bring it back online again. If you have properly configured backups this looks like the following:

    trepctl -host donor offline
    trepctl -host donor backup
    trepctl -host donor online
    

    If backups are not configured, you would substitute appropriate database dump commands for trepctl backup. Once this step is complete the donor is not needed further.

    [Warning]Warning

    For this procedure to work properly in all cases you must ensure that the backup is transactionally consistent, i.e., that there are no writes going to the database. This is not normally an issue with slaves but requires quiescing applications if the donor is a master.

  2. Provision the new slave by starting the replicator and loading the backup. Again, if backups are configured this is a relatively simple procedure as shown below.

    trepstart 
    trepctl -host recipient restore
    trepctl -host recipient online
    

    If backups are not configured, you would substitute appropriate database restore commands. These must run before you start the replicator and bring it online.

The Tungsten Replicator does not automatically purge records from the history catalog table, which comprises the THL. This table therefore grows without bound unless manually truncated.

When purging THL records, it is important to avoid deleting rows corresponding to SQL events that have yet to be replicated to one or more slaves. This applies obviously to the current master database. However, it also applies to slaves used for failover. It is important to avoid a situation where a slave is promoted to master but does not have events required by other slaves that happen to be lagging behind.

The following is a safe procedure for checking and truncating THL records; a hedged sketch appears after the warning below.

[Warning]Warning

Do not delete all records in the THL or replication restart may fail. You should always leave at least one record.
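
A minimal sketch of such a purge is shown below. It assumes the catalog schema and credentials from the sample configuration and that slaves are fully caught up to their THL; the placeholder seqno value is illustrative only:

# 1. On every slave, find the highest sequence number present in its THL.
mysql -utungsten -psecret -e "SELECT MAX(seqno) FROM tungsten.history"

# 2. On the master, delete only rows older than the lowest value reported by
#    any slave, keeping at least one record.
mysql -utungsten -psecret -e "DELETE FROM tungsten.history WHERE seqno < <lowest_slave_seqno>"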

Tungsten Replicator can compute checksums on SQL events before they are loaded into the THL and again whenever they are applied to a target database. Checksums protect against random errors due to data corruption.

Checksums are controlled by properties in the replicator.properties file. The following sample shows these properties.

# Event checksum algorithm.  Possible values are 'SHA' or 'MD5' or 
# empty for no checksums
replicator.event.checksum=md5

# How to react on consistency check failure.  Possible values are 
# 'stop' or 'warn'. 
replicator.applier.consistency_policy=stop

Checksum algorithms must be consistent across master and slave replicators. Also, if no value is selected, checksums are neither computed nor checked.

When checksum checking is in effect, Tungsten Replicator has two possible responses to a failed checksum event based on the consistency policy. If the policy is warn, the replicator writes a warning to the replicator log. If the policy is stop, Tungsten Replicator issues an automatic shutdown that puts the replicator into the OFFLINE state.

When Tungsten Replicator stops due to a checksum failure, it is very important to track down the cause of the failure if this can be done. It is possible to continue by setting the consistency policy to warn in replicator.properties. This allows you to get past a failure and continue replication.

[Note]Note

Checksums are highly recommended for production settings as they prevent inscrutable data inconsistencies due to random data corruption during transport and storage, software failures, or administrative errors.

Tungsten Replicator provides properties to control the behavior of consistency checks. The following example shows these properties.

# How to react on consistency check failure. Possible values are 'stop' or 'warn'. 
replicator.applier.consistency_policy=stop

# Should consistency check be sensitive to column names and/or types? Settings
# on a slave must be identical to master's. Values are 'true' or 'false'. 
replicator.applier.consistency_column_names=true
replicator.applier.consistency_column_types=true
      

The consistency_policy is the same property that controls the response to a failed event checksum, as described in Section 3.6, “Event Checksums”. There are also two additional parameters that are helpful for consistency checking across different database types.

  • consistency_column_names - Controls whether the consistency check is sensitive to column names. When set to false, a column named "address" and a column named "ADDRESS" are considered equivalent.

  • consistency_column_types - Controls whether the consistency check is sensitive to column types. When set to false, integer and long values, for example, can compare as equal.

Tungsten Replicator implements its management and monitoring interfaces through JMX. JMX is a standard management API that allows Java processes to expose monitoring data, management commands, and notifications of state changes to external clients. For more information on JMX itself, see the Sun documentation, for example http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/.

Tungsten Replicator separates management and monitoring statistics into separate MBean interfaces.

[Tip]Tip

Tungsten Replicator JMX interfaces often change, though we make every effort to ensure upward compatibility for clients. The Javadoc pages in binary builds or the Java interfaces in source code are the final reference for MBean interface behavior.

Any client that can manipulate JMX interfaces can manage and monitor Tungsten Replicator.
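
For example, a generic JMX console such as jconsole (shipped with the Sun JDK) can attach to the replicator. This is a hedged sketch that assumes the RMI port from the sample configuration (replicator.rmi_port=10000), no JMX authentication, and that the connector is registered under the JDK's default name; if the connection fails, check the replicator log for the exact JMX service URL:

jconsole centos5a:10000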

This chapter provides guidance on overall strategy for using Tungsten Replicator in your environment. The techniques described here assume familiarity with Tungsten Replicator as described in Section 3.1, “The Tungsten Replicator Process”. Every system designer or DBA who uses replication should be familiar with these approaches.

[Note]Note
More detailed descriptions will be added in a future version of this manual.

Automatic failover on master failure is a standard technique to ensure database high availability. Automation allows you to elect a new master whenever your current master fails without having to wait for a human to make a decision. It is key to ensuring that databases remain highly available at all times.

The simplest way to implement automated failover is using a master/slave pair with a shared virtual IP address (VIP). The current master host "owns" the virtual IP. Client applications access the database using the virtual IP address rather than connecting directly. This approach makes it easy to shift clients from one database host to another in the event of a failover.

Failover itself is controlled by an external program known as a "cluster manager." The cluster manager is responsible for deciding when it is time to do a failover. For a variety of reasons Tungsten Replicator cannot make this decision for itself. There are many programs that can function as a cluster manager. One of our favorites is Heartbeat, which is available from http://www.linux-ha.org/Heartbeat. Heartbeat is easy to set up and quite reliable.

The following diagram illustrates failover using a master/slave pair. When Heartbeat on the slave detects that the master host is down or not responding, it takes over the VIP and promotes the local slave to be master using the procedure described in Section 3.3, “Master Failover”.


Master/slave pairs are a good choice for basic availability since there is no question which slave should be promoted. If there are multiple slaves you must check the slave positions before failing over as described in Section 5.8, “Dealing with a Failed Master”. For a seat-of-the-pants solution you can write a watch-dog process that monitors the master and then queries each slave to get its position, as sketched below. You then fail over using the most advanced slave.
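
A minimal sketch of such a position check is shown below. It assumes the catalog schema and THL credentials from the sample configuration, an illustrative list of slave hosts, and that the THL sequence number is an adequate proxy for slave position:

# Report the highest THL sequence number on each candidate slave; the slave
# with the largest value is the most advanced and is the one to promote.
for host in slave1 slave2 slave3; do
  echo -n "$host: "
  mysql -h "$host" -utungsten -psecret -N -e "SELECT MAX(seqno) FROM tungsten.history"
done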

Promoting one of several slaves is actually a complex problem with special cases that can be quite subtle and hence difficult to handle correctly. Continuent is implementing a management framework designed to solve this problem using group communications. Check our community website for more information.

[Warning]Warning

Whatever solution you choose for handling automated failure, be sure to test it thoroughly and regularly. This is the only way to ensure that failover will work correctly when you need it.

Tungsten Replicator has a number of features that make it well-suited for performing database server upgrades as well as application migrations. The basic idea is to upgrade a slave database that is in the OFFLINE state, let it catch up with any missed updates, and then promote it to master. This approach solves a number of difficult problems for DBAs.

Tungsten Replicator has a simple procedure for provisioning slaves and performing clean failover. It can also replicate from newer to older database versions, such as MySQL 5.0 to 4.1. Finally, filters allow users to replicate even back to databases that have schema changes. You can construct filters to drop new columns or even entire updates that will not work when replicating back to an old version.

The setup for upgrade is almost identical to automatic failover, except that a cluster manager is not required.


The basic upgrade procedure is described below.

  1. Set up a master/slave pair and ensure replication is working correctly.

  2. Take the slave OFFLINE and perform the upgrade. If the upgrade fails, discard the slave and start over.

  3. Bring the slave back online and allow it to catch up with the master. Once the slave is in the ONLINE:SLAVE state it is fully caught up.

  4. Failover from the existing master to the slave.

  5. Bring the old master up as a slave.

You can upgrade the old master at leisure or simply discard it if you do not need it after the upgrade.

If there are also schema changes you may have to insert filters on the slave to alter or discard SQL events that go back to the old master. Depending on the extent of the changes, it may not be practical to replicate back to the master.

The simplest ways to handle the switch-over of client applications range from updating their connection information to using a VIP as described in Section 4.1, “Implementing Automated Failover”. Both of these techniques require applications either to stop or handle broken connections when the failover occurs.

You can also implement fully seamless upgrades though these are much harder and may place constraints on your application design. One technique is to build logic into applications so that they handle a database server reboot transparently. A better approach is to insert a middleware layer like the Tungsten Connector which provides an abstraction layer and makes database restart transparent. Continuent is actively working on solutions that enable seamless failover for master/slave configurations.

[Tip]Tip

Avoid mixing too many things in a single upgrade. It is generally better to proceed by single steps that change only one thing at a time.

[Warning]Warning

Always test upgrades carefully on real data! Using a slave for the upgrade allows you to test repeatedly on production data. It is great to have a backout for a failed upgrade but even better if you don't have to use it.

Tungsten Replicator supports replication between different database types as well as between databases and non-database entities like applications or even flat files. This section provides you with some ideas about how to set up different heterogeneous replication use cases.

Tungsten Replicator can replicate between different database types, since SQL events are essentially generic after they have been extracted. To replicate between different databases, set up a replicator for each server as if you were replicating between databases of the same type. The replicator for the "from" database must run in the master role, while the replicator for the "to" database acts as a slave.

For example, you can set up replication between MySQL and Oracle as follows. Install and configure Tungsten Replicator for the MySQL database and run it in the master role. Install and configure Tungsten Replicator for the Oracle database and run it in the slave role.

When replicating between different database types you must be careful what is being replicated. SQL INSERT, UPDATE, and DELETE statements tend to be quite portable. So, for example, you can replicate from a MySQL 5.0 instance using statement replication to an Oracle instance. Beware, however, that SQL functions as well as binary data types tend to be relatively non-portable. Also, DDL statements beyond the simplest CREATE TABLE expressions are rarely portable.

SQL portability issues can be solved in at least two ways.

[Note]Note

The current version of the replicator has some temporary limits that affect how easily it can replicate between different database types. The most significant of these is that there is no fully generic JDBC applier that works with any database type. Appliers are currently specific to one database. This limit and others will be removed shortly.

Replication to and from non-databases is not supported in the off-the-shelf replicator. However, such replication is quite easy to implement for anyone with a reasonable understanding of Java and the willingness to write a replicator plug-in, as described in Chapter 7, Extending the Tungsten Replicator System.

A simple example of non-database replication is to morph database changes into XML documents, one per update. To do this you would implement an applier plug-in that takes SQL updates and converts them into your preferred XML format. This applier just needs to be able to read the SQL event data structures and generate XML tags.

[Tip]Tip

If you use row replication, the applier will be much easier to write. If using statement replication, you may need to parse SQL text, which can be a non-trivial undertaking.

When converting SQL events into XML you might wish to convert only certain changes. You could build logic into your applier to skip events that do not interest you, but a better way is to write a filter that drops uninteresting events. This approach results in two simpler components that can also be used independently.
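A minimal sketch of such an XML applier follows. It is illustrative only: the applier interface name, the event accessors, and the exception type are assumptions, while the setter convention and the prepare()/release() life cycle follow Chapter 7, Extending the Tungsten Replicator System. Consult the plug-in Javadoc for the real signatures in your release.

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

// Hypothetical applier plug-in that writes one XML document per replicated update.
public class XmlFileApplier implements Applier          // interface name assumed
{
    private String outputDir;                           // injected from replicator.properties

    public void setOutputDir(String outputDir)          // property setter (see Section 7.3)
    {
        this.outputDir = outputDir;
    }

    public void prepare(PluginContext context)          // allocate resources here, not earlier
    {
        new File(outputDir).mkdirs();
    }

    public void apply(DBMSEvent event) throws ApplierException   // signature assumed
    {
        // With row replication the event carries column values directly;
        // with statement replication you would have to parse the SQL text.
        String xml = "<update seqno=\"" + event.getSeqno() + "\"/>";
        try
        {
            FileWriter out = new FileWriter(new File(outputDir, event.getSeqno() + ".xml"));
            out.write(xml);
            out.close();
        }
        catch (IOException e)
        {
            throw new ApplierException("Unable to write XML document", e);
        }
    }

    public void release(PluginContext context)          // free resources on shutdown
    {
    }
}

A companion filter implementing the same life cycle could drop uninteresting events before they ever reach this applier.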

Tungsten Replicator supports the ability to create large numbers of replica databases from one or two slaves. This is a very convenient solution for organizations that have highly variable database load over time and may need to scale up to a high number of reads very rapidly.

One simple way to scale reads rapidly is to maintain a permanent donor slave that is used only as a source for provisioning read-only copies. When your read load increases and you need more replicas, you can add virtual machines and provision new slaves on them from the donor, as described in Section 3.4, “Provisioning New Slaves”. As soon as the replicas are no longer needed, you can simply deactivate them.

Cloud environments like Amazon add some interesting twists to slave scaling. For example, Amazon Elastic Block Store supports point-in-time snapshots. You can use this feature to provision slave databases rapidly at the file system level without using database dump and restore, which tends to be slow and resource intensive.
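As a rough illustration, the file-system-level step might use the Amazon EC2 API tools along the following lines. The identifiers are placeholders, and the donor's database files should be in a consistent state (for example, with the slave replicator offline) when the snapshot is taken.

# Snapshot the donor slave's data volume, then build and attach a copy for the new slave
ec2-create-snapshot vol-11111111
ec2-create-volume --snapshot snap-22222222 -z us-east-1a
ec2-attach-volume vol-33333333 -i i-44444444 -d /dev/sdf

The new virtual machine can then mount the volume and be brought on line as a slave along the lines of Section 3.4, “Provisioning New Slaves”.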

A future revision of this manual will expand on using replication for fast scaling in virtual and cloud environments.

This chapter deals with Tungsten Replicator troubleshooting as well as tuning.

There are a number of performance trade-offs associated with replication. Tungsten Replicator includes several property settings that can help improve performance in specific situations.

SQL may fail on the slave for reasons unrelated to replication. For example, dropping a database that exists on the master but not on the slave will result in a failure on the slave side. There are two ways to skip over the failed statement.

The first method is to use the thl utility to find the failing SQL statement and skip over it. This is the recommended procedure for dealing with most failures. Here is the procedure.

You can also configure slave replicators to skip over failed statements automatically. To do this, set the applier failure policy in replicator.properties as shown below.

replicator.applier.failure_policy=warn
    
[Warning]Warning

It is your responsibility to ensure that there are no problems with data consistency due to skipped events. You should always ensure you understand the root cause of any applier failure. If you set the applier policy to 'warn', be aware that it applies to any event failure. This removes a fundamental check on replication, so think carefully before using the 'warn' setting.

This section describes a simple failover scenario in which a master fails and must be replaced, due either to a database server failure or a failure of the master replicator process.

To recover, proceed as follows:

After you have failed over to another master, you should repair the failed master. The first step is to analyze and correct any problems that led to the failure. Once this is done you can recover the master.

The standard way to recover a master is to provision it as a slave using the procedure described in Section 3.4, “Provisioning New Slaves” and then perform another master failover.

[Warning]Warning

Making a failed master a slave without first re-provisioning it can lead to data inconsistencies if the failed master had unreplicated changes that were lost during the failover to a slave. Re-provisioning fully synchronizes the database contents with the current master and avoids possible data problems.

Tungsten Replicator can be run from the command line interface or as an operating system service. The following commands are available for Tungsten Replicator.

[Tip]Tip
To avoid file permission problems and possible failures, always use the correct account to start and stop Tungsten Replicator. By convention this account is continuent. You may use any account to run other commands, such as the trepctl command.

The Tungsten Replicator bin directory contains scripts that can be used to start and stop the Replicator from the command line prompt.

These instructions are only applicable for the Linux, Solaris, and Mac OS X operating systems.

Tungsten Replicator includes trepsvc, which is based on the Java Service Wrapper (http://wrapper.tanukisoftware.org). This allows you to run Tungsten Replicator as a service that is protected against signals and implements the standard interface used by Unix services. The service implementation also restarts Tungsten Replicator in the event of a crash.

The Tungsten Replicator service implementation supports services on 32-bit and 64-bit versions of Linux, and on Mac OS X platforms.

You can adjust the Tungsten Replicator service configuration by editing the conf/wrapper.properties configuration file. Please read the comments in the file for information on legal settings. For most installations, the included file should work out of the box.
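For example, to raise the Java heap available to the replicator (see Section 5.2, “Out of Memory Errors”), you can adjust the Java Service Wrapper's standard memory setting in conf/wrapper.properties; the value below is only an illustration.

# Maximum Java heap size for the replicator JVM, in megabytes
wrapper.java.maxmemory=1024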

On Linux hosts you can add trepsvc as a system service that starts and stops automatically, using the chkconfig command, as shown in the following example:

ln -s /opt/tungsten/tungsten-replicator/bin/trepsvc \
    /etc/init.d/trepsvc
chkconfig --add trepsvc

If you are using a dedicated account as recommended, you should edit the trepsvc script and set RUN_AS_USER to that account.
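For example, to run the service under the continuent account described earlier (substitute whatever account you actually use):

# In the trepsvc script
RUN_AS_USER=continuent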

The trepsvc command is a replacement for the trepstart and trepstop commands. The trepsvc sub-commands are summarized below:

  • trepsvc start - This command starts the Tungsten Replicator service if it is not already running. Log output is written to trepsvc.log.

  • trepsvc status - This command prints out the status of the Tungsten Replicator service, namely whether it is running and, if so, under which process ID.

  • trepsvc stop - This command stops the Tungsten Replicator service if it is currently running.

  • trepsvc restart - This command restarts the Tungsten Replicator service, stopping it first if it is currently running.

  • trepsvc console - This command runs the Tungsten Replicator service in a Java console program that allows you to view log output in a GUI shell.

  • trepsvc dump - This command sends a QUIT signal (kill -QUIT) to the Java VM to force it to write a thread dump to the log. This command is useful for debugging a stuck process.

[Note]Note

For maximum ease of use, set the replicator.auto_enable property to true so the replicator will go online automatically on start or restart. This allows you to start the replicator very quickly, for example using trepsvc restart.
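For example, in replicator.properties:

replicator.auto_enable=true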

The commands in the sections below change Tungsten Replicator state.

The trepctl script allows you to submit commands to Tungsten Replicator. These commands change the Tungsten Replicator state. The general syntax is as follows:

trepctl [global_options] command [command_options]

The following global options are supported.

Commands and their options are described below.

The Replicator THL Utility (thl) allows users to view and manipulate events in the Transaction History Log. Events are serialized in a platform-independent format that is not directly human-readable. The thl utility not only prints events in an easy-to-read format but also supports operations to purge or skip THL events.

The thl utility resides in the bin directory. The command syntax is as follows:

thl [global-options] command [command-options]

The following sections provide details of commands and options.

[Warning]Warning

The thl utility must be used with caution. Purging or skipping events can lead to data inconsistencies between replicas or cause replicators to fail. Observe all caveats given below.

[Important]Important

The thl utility is under development and not all commands documented in this manual are fully functional yet. Any such lacunae will be filled shortly.

Each thl invocation contains a command that specifies the management operation to perform.

The Tungsten Replicator implements extractors, filters, and appliers as plug-ins. Users can write their own plug-ins to add specialized capabilities to the Tungsten Replicator, such as supporting new databases, replicating from databases into applications or message processing systems, or performing custom transformations on replicated data.

This chapter describes how plug-ins work and provides guidelines for writing new plug-ins.

ReplicatorPlugin implementations have the following life cycle.

  1. Configuration. Tungsten Replicator instantiates plug-ins and assigns their properties when processing a 'configure' administrative operation. All plug-ins are configured at this time. Configuration includes the following steps.

    [Warning]Warning

    The configuration stage should not allocate resources; resource allocation belongs in the prepare() call described in the next step.

  2. Preparation. The Tungsten Replicator calls the plug-in prepare() method to allow the plug-in to allocate resources for operation. At this point the plug-in should log in to databases, open files, and perform other operations needed to get ready for actual work.

    Appliers and post-storage filters are prepared when Tungsten Replicator goes into the SLAVE state. Extractors and pre-storage filters are prepared when Tungsten Replicator goes into the MASTER state.

  3. Release. The Tungsten Replicator calls the plug-in release() method when the plug-in is about to be de-allocated. The plug-in is responsible for cleaning up all resources at this time.

Between preparation and release the plug-in is active and handles calls specific to that plug-in type. Plug-in-specific methods are never invoked at any other time.
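The life cycle can be summarized with the following skeleton. The method signatures are assumed for illustration; the actual ReplicatorPlugin interface is described in Section 7.1, “The ReplicatorPlugin Interface”.

// Skeleton showing the order in which the replicator drives a plug-in.
public class ExamplePlugin implements ReplicatorPlugin
{
    private String url;

    public void setUrl(String url)                 // 1. configuration: properties applied via setters
    {
        this.url = url;
    }

    public void configure(PluginContext context)   // 1. configuration: do not allocate resources
    {
    }

    public void prepare(PluginContext context)     // 2. preparation: open connections, files, etc.
    {
    }

    public void release(PluginContext context)     // 3. release: free all resources
    {
    }
}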

Properties in replicator.properties are mapped to setters on the plug-in instance according to the following rules.

[Warning]Warning
It is tempting to try to process properties directly by calling PluginContext.getReplicatorProperties() rather than using setter methods. You must resist this temptation! Setters are type-safe and allow Tungsten Replicator to perform automatic validity checks of property assignments. Also, processing properties directly can result in complex or brittle configuration that is likely to fail if the property file format changes.
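As an illustration of the mapping, a property such as the following (the key and filter name are hypothetical; see Section 3.1.4 for the real prefixing rules):

replicator.filter.renamer.prefix=demo_

would be delivered to the plug-in through a matching type-safe setter rather than read from the context directly:

// Hypothetical filter class receiving the value via a setter
public class RenamerFilter implements Filter       // interface name assumed
{
    private String prefix;

    public void setPrefix(String prefix)           // receives the value of ...renamer.prefix
    {
        this.prefix = prefix;
    }
}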

This appendix describes the Tungsten Replicator plug-ins.

[Note]Note

Plug-in properties use short names. When actually using a property you must prefix it as described in Section 3.1.4, “Static Properties and the replicator.properties File”.

The filter plug-ins are described in the sections below.

- Continuent -