Configuring Slony-I Cascading Replication On PostgreSQL 8.3


This guide is aimed at users who would like to configure Slony-I to replicate a database from a master to a slave on different hosts.

If you are new to Slony-I, please see the terminology section at the bottom of this page, and be sure to read over the official Slony-I documentation.



This guide assumes that Postgres and Slony-I have been installed.



The design of the replication for this guide is as follows:

There will be a master node on host hostM with a database db. We want to replicate db to another node on hostS, which will be the slave node. The replication will be one-way, so that only updates made on the master's db are replicated down to hostS.



Slony uses an application called slonik to initialize, mould and update the databases in a cluster. For Slony to operate, it creates tables in each database participating in replication. slonik executes a number of commands on the databases, such as creating and modifying those tables, so that they are initialized for use with Slony.

In order for slonik to do this, slonik must be able to connect to your database on hostM and hostS.

To test this out, try the following from the master host, hostM.

psql -d db -h hostM -U slony
psql -d db -h hostS -U slony

Note that in the examples above, I'm trying to connect as the user 'slony'. This is a superuser that I have created in Postgres which will be used only for replication. This isn't necessary, and you could instead connect using the standard 'postgres' user.
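If you would like to create such a dedicated replication superuser, something like the following should work (run it as the postgres user; -s makes the new role a superuser and -P prompts for its password):

#postgres:# createuser -s -P slony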

Right, so if that works, then slonik should work too. If it didn't, you may need to try a couple of things:

1. Check the host is up

ping hostX

2. Check that your firewall is not blocking the connection (usually an issue on the slave node).

#root:# iptables -L

Scan the output of the above command to see whether connections are being accepted on port 5432, the standard Postgres port.
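If port 5432 is being blocked, a rule along these lines would open it. This is only a sketch; adapt it to your own firewall policy and chain layout:

#root:# iptables -A INPUT -p tcp --dport 5432 -j ACCEPT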

3. Check your postgresql.conf file

#postgres:# cd $PGDATA
#postgres:# grep listen_addresses *.conf

You want to see something like listen_addresses = '*'. If there is a hash in front of the line, then it has been commented out (the default, which listens on localhost only). Use your favourite text editor to change this.
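After editing, the relevant line should look something like the following. Note that listen_addresses only takes effect after a full server restart, not a reload:

listen_addresses = '*'

#postgres:# pg_ctl restart -D $PGDATA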

4. Check the Postgres "firewall". Postgres uses an access control list, the pg_hba.conf file, to control who can connect to which database. It is well documented in the Postgres manual, and I recommend everyone knows how to configure this file.

The following two lines tell Postgres that the user slony will be accepted when connecting to any database from either hostM or hostS. Note that this file is parsed in order, and the first match is used. Also note that I have configured it to use TCP/IP ('host' lines) and not the local Unix socket, because the slonik commands used later will reference the nodes by IP address.

 host all slony <serverIP>/32 trust
 host all slony <clientIP>/32 trust
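After editing pg_hba.conf, tell Postgres to re-read it. Unlike listen_addresses, this only needs a reload, not a restart:

#postgres:# pg_ctl reload -D $PGDATA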

Hopefully, if you've got this far, your slonik will now be able to talk to all the nodes, and we can carry on.


Slonik Initialization

Slony can do a number of things, but one thing it cannot do is copy schema and schema changes. Because of this, before initializing Slony, we must copy the DDL (the structure/schema) of the database to the slave node, using commands like the following:

#slony@master#: pg_dump -s db | gzip > ddl.sql.gz

<copy file to slave>

#slony@slave#: gunzip -c ddl.sql.gz | psql -d db
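If the two hosts can already see each other on port 5432, you can skip the intermediate file and pipe the schema straight across; a sketch, run from any host that can reach both:

#slony@master#: pg_dump -s -h hostM db | psql -h hostS -d db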

From here, we can start executing slonik scripts to get the databases talking before we start copying data. When a slonik command is executed, it will try to establish a connection with the slave nodes and propagate the information. This initial step creates the links necessary to get the nodes talking...

I like to do this in one script (on the server, although it shouldn't matter) as follows:

#!/usr/bin/slonik
define CLUSTER cname;
define PRIMARY 1;
define SLAVE 10;
cluster name = @CLUSTER;

node @PRIMARY   admin conninfo = 'dbname=db host=hostM user=slony';

node @SLAVE admin conninfo = 'dbname=db host=hostS user=slony';

init cluster (id=@PRIMARY, comment='Primary Slony Node');

store node (id=@SLAVE, event node=@PRIMARY, comment='Slave Slony Node');

store path (server=@PRIMARY, client=@SLAVE, conninfo='dbname=db host=hostM user=slony');

store path (server=@SLAVE, client=@PRIMARY, conninfo='dbname=db host=hostS user=slony');

For information on these commands, see the slony docs here:

Here is a quick run through of what the commands do here:

Line 1 (the '#!/usr/bin/slonik' shebang) tells the shell which interpreter should run this script, so if you make the script executable, it will be run by slonik.

The define commands operate like standard macro defines and allow for more readable scripts. They also help to avoid typo-style errors.

'node' informs slony of the existence of a node and how to connect to it.

'init cluster' creates the slony tables on the master node.

'store node' creates the slony tables on the slave nodes.

'store path' creates a uni-directional path for communication between two nodes.

Next we can run the script:

#slony@server# chmod u+x <script>
#slony@server# ./<script>
#slony@server# psql -U slony -h hostM -d db -c "\dn" | grep cname
#slony@server# psql -U slony -h hostS -d db -c "\dn" | grep cname

Hopefully the output of the last two commands will show that there is a schema called "_cname" (an underscore followed by the cluster name) in both the master and slave databases.
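You can also look inside the Slony schema itself to confirm that both nodes have been registered; the sl_node table (here assuming the cluster name 'cname' from the script above) should contain one row per node:

#slony@server# psql -U slony -h hostM -d db -c "SELECT no_id, no_comment FROM _cname.sl_node;"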

If everything was successful, then we can move onto starting the replication daemons.



Starting The Slon Daemons

The slon daemons are the active agents of Slony. They act on the triggers placed in the database and send the data across to the slave nodes. For each node in the cluster, there must be a slon daemon running.

To start the slon daemon on the master, type:

slon cname "dbname=db host=hostM user=slony"

You should see a bunch of messages fly past.

To start the slon daemon on the slave, type:

slon cname "dbname=db host=hostS user=slony"

You should now see a bunch of messages fly past, and if you're watching the server console, you will see some more messages flying past there too.

Check that there aren't any errors; if you've been through the connection steps above, then you shouldn't have any here.
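For testing it is convenient to keep the daemons in the foreground, but you can also push them into the background and capture their output in a log file; a sketch for the master (the runlevel script mentioned later is the better long-term option):

#slony@master#: slon cname "dbname=db host=hostM user=slony" > /tmp/slon_master.log 2>&1 &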

At this point in time, the nodes are all talking to each other, waiting to hear if there is any data that needs copying. We have not defined what data should be copied yet and so the daemons will be quite idle. In order to start replicating, we need to subscribe some data.



Subscribing A Replication Set

The following script can be run from the server; it tells Slony to create triggers on a set of tables and to replicate whenever data is modified, added or deleted.

This script will need to be adjusted for the tables and sequences that you want to replicate.

#!/usr/bin/slonik
define CLUSTER cname;
define PRIMARY 1;
define SLAVE 10;
cluster name = @CLUSTER;

node @PRIMARY admin conninfo = 'dbname=db host=hostM user=slony';

node @SLAVE admin conninfo = 'dbname=db host=hostS user=slony';

create set (id=1, origin=@PRIMARY, comment='set1');

set add table (id=1, set id=1, origin = @PRIMARY, fully qualified name = 'schema.tablename', comment = 'table');

set add sequence (id=1, set id=1, origin = @PRIMARY, fully qualified name = 'schema.sequencename', comment = 'seq');

subscribe set (id = 1, provider = @PRIMARY, receiver = @SLAVE); 

After running this script, the consoles showing the slon daemons should show that data is being copied across. Note that in general you wouldn't run the slon daemons as foreground processes, but rather use the runlevel script (installed to /etc/init.d if you used an RPM for installation).
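To convince yourself that replication is really working, modify a row on the master and look for the change on the slave. The table and column names here are just placeholders matching the script above:

#slony@master#: psql -d db -c "INSERT INTO schema.tablename (somecolumn) VALUES ('test');"
#slony@slave#: psql -d db -c "SELECT * FROM schema.tablename;"

Note that Slony also installs a trigger on the slave which rejects normal writes to replicated tables, so the slave copy stays read-only.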

That's it for now.

Hope this tutorial has helped enlighten you and please feel free to leave comments, point out errors, or ask for more.




Terminology

A node is a term for a host and a respective database. For example, two nodes could reside on the same server, but point to different databases.

A master node refers to the node that the data will be copied from.

A slave node refers to the node that the data will be copied to.

A cluster refers to a collection of nodes that interact during replication.
