Production applications typically have a separate environment for disaster recovery and business continuity. Depending on the needs of the application, this may be a hot backup or a warm standby. Either way, you need your data replicated to your DR environment.

For Riak clusters there is a simple way to do this. By taking advantage of a post-commit hook, you can have every object written to Riak pushed to a node in a separate cluster. I’ve written a simple Erlang library called doppelganger that does this. This approach keeps the clusters separate yet in a mirrored state, as the diagram below illustrates.
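
To make the mechanism concrete, here is a minimal sketch of what such a post-commit hook can look like. This is illustrative rather than the actual doppelganger source: it assumes the riak-erlang-client (riakc) is on the code path, hard-codes the target for clarity, and omits error handling.

%% Illustrative sketch only -- not the actual doppelganger source.
%% Riak calls a post-commit hook with the committed riak_object.
-module(doppelganger_sketch).
-export([replicate/1]).

replicate(Object) ->
    Bucket = riak_object:bucket(Object),
    Key    = riak_object:key(Object),
    Value  = riak_object:get_value(Object),
    %% Push a copy to the secondary cluster over Protocol Buffers.
    {ok, Pid} = riakc_pb_socket:start_link("your-doppelganger-host", 8081),
    ok = riakc_pb_socket:put(Pid, riakc_obj:new(Bucket, Key, Value)),
    riakc_pb_socket:stop(Pid),
    ok.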

To use the library, you need to do three things in your primary environment.

  1. Add doppelganger to your primary riak environment
  2. Register doppelganger as a default post-commit hook
  3. Set the options for doppelganger in your riak app.config

Note that the secondary environment needs no configuration, as it is merely the target of the replication.

Adding doppelganger to Riak

In Riak’s app.config, add an add_paths term to the riak_kv section so the node can find custom Erlang modules. You can specify any path you like; I typically use something like /etc/riak/erlang.

{add_paths, ["/etc/riak/erlang"]}

Build doppelganger by running make and drop the resulting .beam files there.
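
To confirm the node can actually find the module, you can attach to the running node and check the code path:

%% From riak attach on the primary node. code:which/1 returns the
%% path to the .beam file if the module is resolvable, or the atom
%% non_existing if it is not.
1> code:which(doppelganger).
"/etc/riak/erlang/doppelganger.beam"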

Register doppelganger as a post-commit hook

If the riak_core section of your app.config does not already have a default_bucket_props entry, add the term below. If it does, merge the postcommit entry into your existing list rather than adding a second default_bucket_props.

{default_bucket_props, [
  {postcommit, [
    {struct, [{<<"mod">>, <<"doppelganger">>}, {<<"fun">>, <<"replicate">>}]}
  ]}
]}
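
If you prefer not to change the cluster-wide defaults, the same hook can also be registered on an individual bucket at runtime from riak attach. A sketch, with a hypothetical bucket name:

%% From riak attach on the primary node. The bucket name is
%% hypothetical; the hook term matches the one in app.config above.
Hook = {struct, [{<<"mod">>, <<"doppelganger">>}, {<<"fun">>, <<"replicate">>}]},
riak_core_bucket:set_bucket(<<"my_bucket">>, [{postcommit, [Hook]}]).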

Set options for doppelganger

Doppelganger is meant to be unobtrusive. The only options it supports are enabling the module and setting the target host and port. This configuration goes into a separate section of app.config.

{doppelganger, [
  {active, true},
  {riak_host, "your-doppelganger-host"},
  {riak_port, 8081} % The secondary cluster's Protocol Buffers (PB) port
]}
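
Since app.config is an ordinary Erlang config file, a section like this is readable through the standard OTP application environment. A sketch of how the module can pick the settings up (whether doppelganger reads them exactly this way is an assumption):

%% Sketch: reading the doppelganger section of app.config through the
%% standard OTP application env. The keys match the config above; how
%% doppelganger itself reads them is assumed, not confirmed.
is_active() ->
    application:get_env(doppelganger, active) =:= {ok, true}.

target() ->
    {ok, Host} = application:get_env(doppelganger, riak_host),
    {ok, Port} = application:get_env(doppelganger, riak_port),
    {Host, Port}.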

Once these steps are complete, fire up your secondary environment and then your primary environment. You should see the post-commit hook registered in any buckets you have defined, and data posted to the primary will appear in the secondary environment.
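
You can verify the round trip from an Erlang shell with the riakc client. Host names here are placeholders; 8087 is Riak's default PB port, and 8081 is the secondary port configured above:

%% Write to the primary, then read the replicated copy from the
%% secondary. Post-commit hooks run after the write returns, so give
%% the replication a moment before reading.
{ok, Primary}   = riakc_pb_socket:start_link("primary-riak-host", 8087),
{ok, Secondary} = riakc_pb_socket:start_link("your-doppelganger-host", 8081),
Obj = riakc_obj:new(<<"test">>, <<"key1">>, <<"hello">>),
ok = riakc_pb_socket:put(Primary, Obj),
timer:sleep(1000),
{ok, Copy} = riakc_pb_socket:get(Secondary, <<"test">>, <<"key1">>).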

Future Plans

The next step is to handle network partitions and node failures in the secondary environment, to ensure no data is lost. I also need to preserve the Riak object metadata so that the replica matches the original as closely as possible.