Consul with the Autopilot Pattern

Consul in Docker, designed to be self-operating according to the autopilot pattern. This application demonstrates how to configure the Consul raft so it can be used as a highly-available discovery catalog for other applications that follow the Autopilot Pattern.

Join the chat at https://gitter.im/autopilotpattern/general

Using Consul with ContainerPilot

This design starts all Consul instances with the -bootstrap-expect flag. This flag tells Consul how many server nodes to expect; the cluster bootstraps itself automatically once that many servers are available. We still need to tell Consul how to find the other nodes, and this is where Triton Container Name Service (CNS) and ContainerPilot come into play.
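
For illustration only, each server in a three-node raft might be started with a command along these lines; the exact flags are part of the image's ContainerPilot configuration, so treat this as a sketch rather than the actual invocation:

$ consul agent -server \
    -bootstrap-expect 3 \
    -data-dir=/data \
    -config-dir=/config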

The ContainerPilot configuration has a management script that is run on each health check interval. This management script runs consul info and gets the number of peers for this node. If the number of peers is not equal to 2 (there are 3 nodes in total but a node isn't its own peer), then the script will attempt to find another node via consul join. This command will use the Triton CNS name for the Consul service. Because each node is automatically added to the A Record for the CNS name when it starts, the nodes will all eventually find at least one other node and bootstrap the Consul raft.
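
A minimal sketch of that management script, assuming the CONSUL environment variable holds the CNS name (the script name and exact parsing here are illustrative, not the repository's verbatim code):

#!/bin/sh
# Join the raft if this node reports fewer than 2 peers.
PEERS=$(consul info | awk -F' = ' '/num_peers/ {print $2}')
if [ "${PEERS:-0}" -ne 2 ]; then
    consul join "${CONSUL}"
fi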

When run locally for testing, we don't have access to Triton CNS. The local-compose.yml file uses the v2 Compose API, which automatically creates a user-defined network and allows us to use Docker DNS for the service.
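
For a purely local test run against that file (assuming local-compose.yml sits at the repository root), the workflow mirrors the Triton one:

$ docker-compose -f local-compose.yml up -d
$ docker-compose -f local-compose.yml scale consul=3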

Run it!

  1. Get a Joyent account and add your SSH key.
  2. Install the Docker Toolbox (including docker and docker-compose) on your laptop or other environment, as well as the Joyent Triton CLI (triton replaces our old sdc-* CLI tools).

Check that everything is configured correctly by running ./setup.sh. This will verify that your environment is set up correctly and will create an _env file that injects an environment variable containing the Triton CNS service name for Consul. We'll use this CNS name to bootstrap the cluster.
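
The generated _env file will contain at least the Consul CNS name. Its exact value depends on your account and data center; the line below is only a placeholder to show the shape:

CONSUL=consul.svc.<your-account-uuid>.<data-center>.cns.joyent.com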

$ docker-compose up -d
Creating consul_consul_1

$ docker-compose scale consul=3
Creating and starting consul_consul_2 ...
Creating and starting consul_consul_3 ...

$ docker-compose ps
Name                        Command                 State       Ports
--------------------------------------------------------------------------------
consul_consul_1   /usr/local/bin/containerpilot...   Up   53/tcp, 53/udp,
                                                          8300/tcp, 8301/tcp,
                                                          8301/udp, 8302/tcp,
                                                          8302/udp, 8400/tcp,
                                                          0.0.0.0:8500->8500/tcp
consul_consul_2   /usr/local/bin/containerpilot...   Up   53/tcp, 53/udp,
                                                          8300/tcp, 8301/tcp,
                                                          8301/udp, 8302/tcp,
                                                          8302/udp, 8400/tcp,
                                                          0.0.0.0:8500->8500/tcp
consul_consul_3   /usr/local/bin/containerpilot...   Up   53/tcp, 53/udp,
                                                          8300/tcp, 8301/tcp,
                                                          8301/udp, 8302/tcp,
                                                          8302/udp, 8400/tcp,
                                                          0.0.0.0:8500->8500/tcp

$ docker exec -it consul_consul_3 consul info | grep num_peers
    num_peers = 2
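
You can also ask any node for its view of the cluster membership; once bootstrapped, each container should list all three servers:

$ docker exec -it consul_consul_3 consul members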

Using this in your own composition

The Consul service definition can be dropped into any Docker Compose file. Set the ContainerPilot configuration of each of your other services to use the CONSUL environment variable as its Consul target, and populate that variable with the CNS name. On Triton, you should consider running a Consul agent in the application container as a coprocess, pointing the agent at the address in the CONSUL environment variable. The relevant section of the ContainerPilot configuration might look like this:

{
  "consul": "localhost:8500",
  "coprocesses": [
    {
      "command": ["/usr/local/bin/consul", "agent",
                  "-data-dir=/data",
                  "-config-dir=/config",
                  "-rejoin",
                  "-retry-join", "{{ .CONSUL }}",
                  "-retry-max", "10",
                  "-retry-interval", "10s"],
      "restarts": "unlimited"
    }
  ]
}

A more detailed example of a ContainerPilot configuration that uses a Consul agent co-process can be found in autopilotpattern/nginx.

Triton-specific availability advantages

Some details about how Docker containers work on Triton have specific bearing on the durability and availability of this service:

  1. Docker containers are first-order objects on Triton. They run on bare metal, and their overall availability is similar to or better than what you would expect of a virtual machine in other environments.
  2. Docker containers on Triton preserve their IP and any data on disk when they reboot.
  3. Linked containers in Docker Compose on Triton are distributed across multiple unique physical nodes for maximum availability in the case of node failures.

Credit where it's due

This project builds on the fine examples set by Jeff Lindsay's (Glider Labs) Consul in Docker work. It also, obviously, wouldn't be possible without the outstanding work of the HashiCorp team that made consul.io.
