High Availability: RunAbove Virtual IP

Create a virtual IP for critical service failover.

Edouard Ravel, 22 March 2015

High Availability with a Virtual IP.

Virtual IP addresses are useful for making critical services highly available. For example, load balancers often run as an active/passive pair: the active load balancer listens on a virtual IP, and applications connect to that IP. Should the active load balancer fail, the virtual IP is moved to the passive one. Application connections break, then reconnect to the VIP, which now points at the new load balancer.

What good is a load balancer if the load balancer itself is your single point of failure? Whilst we won't cover the installation of a load balancer, we will create two instances that share a virtual IP for failover scenarios.

Services and Tools.

Prerequisites.

Things you will need:

- a RunAbove account with access to OpenStack Horizon
- an SSH key pair registered in your project
- a local Debian/Ubuntu machine for the OpenStack command-line clients

Local Setup.

To install the required clients and their dependencies (the glance client is needed later for glance image-list):

$ sudo apt-get install python-dev python-neutronclient python-novaclient python-glanceclient

RunAbove.

Built with DevOps in mind, RunAbove is an IaaS solution that combines the power of bare metal with the flexibility and high availability of the public cloud. RunAbove offers excellent value for money.

RunAbove Cloud Computing.

The easiest way to communicate with the RunAbove Cloud (GUI aside) is with the neutron and nova clients. To export the environment variables you need to authenticate against the OpenStack API, you can source the RC file with . tenantid-openrc.sh.

To get this file [1]:

Log in on RunAbove, select OpenStack Horizon, go into Access & Security panel, then into API Access tab. Once there you can click on: Download OpenStack RC File.

It should look something like this:

[Screenshot: Download the RunAbove OpenStack RC File]
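
For reference, the downloaded file typically contains something along these lines; the endpoint and tenant values below are placeholders, not real RunAbove values:

#!/bin/bash
# tenantid-openrc.sh -- representative contents only; use the file you downloaded.
export OS_AUTH_URL=https://auth.example.com/v2.0
export OS_TENANT_ID=1234568912345689123456891234
export OS_TENANT_NAME="your_tenant_name"
export OS_USERNAME="your_username"
# The script prompts for your password instead of storing it on disk.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT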

Setup Environment Variables.

Now that you have the openrc.sh file, you can set up your environment variables.

source *-openrc.sh
export OS_REGION_NAME="SBG-1"
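
You can quickly confirm the variables are set; this should list OS_AUTH_URL, OS_TENANT_NAME, OS_USERNAME and OS_REGION_NAME among others:

$ env | grep OS_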

RunAbove Neutron Network Setup.

First, let's check that the nova and neutron clients can communicate with the RunAbove OpenStack API.

$ nova list

It should return something along the lines of:

$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

Creating a Private Network.

We are going to need a private network so that the RunAbove instances can communicate with each other. We will specify a small subnet and hold back a few addresses for future use as virtual IPs: by starting the allocation pool at 10.0.0.21, the addresses 10.0.0.2 through 10.0.0.20 stay free for manual assignment.

$ neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| f5cc56db-db25-4488-8371-c507951b2631 | Ext-Net | 2c56a226-e78b-4268-b3d4-96e61e4fc0fe 92.222.64.0/19 |
+--------------------------------------+---------+-----------------------------------------------------+

Neutron displayed the public network ID, name and associated subnet. Let's create our own private network.

$ neutron net-create Int-Net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb |
| name           | Int-Net                              |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 1234568912345689123456891234         |
+----------------+--------------------------------------+

We have our network; all we need now is a subnet inside it.

$ neutron subnet-create Int-Net 10.0.0.0/24 --name virtual-ip --allocation-pool start=10.0.0.21,end=10.0.0.254
Created a new subnet:
+------------------+---------------------------------------------+
| Field            | Value                                       |
+------------------+---------------------------------------------+
| allocation_pools | {"start": "10.0.0.21", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                 |
| dns_nameservers  |                                             |
| enable_dhcp      | True                                        |
| gateway_ip       | 10.0.0.1                                    |
| host_routes      |                                             |
| id               | cccccccc-dddd-1111-2222-ffffffffffff        |
| ip_version       | 4                                           |
| name             | virtual-ip                                  |
| network_id       | aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb        |
| tenant_id        | 1234568912345689123456891234                |
+------------------+---------------------------------------------+

Checking our Private Network.

$ neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb | Int-Net | cccccccc-dddd-1111-2222-ffffffffffff 10.0.0.0/24    |
| f5cc56db-db25-4488-8371-c507951b2631 | Ext-Net | 2c56a226-e78b-4268-b3d4-96e61e4fc0fe 92.222.64.0/19 |
+--------------------------------------+---------+-----------------------------------------------------+

Our private network is almost good to go. Please note the network IDs, as you will need them later. We will get back to creating virtual IPs and attaching them to instances shortly.

Security groups.

We want our instances to be reachable by ping and SSH, so we add the corresponding ICMP and TCP port 22 rules to the default security group.

$ neutron security-group-rule-create  --protocol icmp default
$ neutron security-group-rule-create  --protocol tcp  --port-range-min 22 --port-range-max 22 default
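
To double-check that the rules landed in the default group, list them; you should see the new ICMP and TCP/22 ingress rules alongside the group's default egress rules:

$ neutron security-group-rule-list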

Creating the instances.

To bring up the private network interface at boot time, we will pass user data to the instances via nova boot.

A user data file is a special key in the metadata service that holds a file which cloud-aware applications in the guest instance can access. One such application is cloud-init, an open-source package from Ubuntu that handles the early initialization of a cloud instance.
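
As a quick aside, once an instance is up you can usually fetch the user data it was given from inside the guest via the EC2-compatible metadata service, which is handy for debugging:

$ curl http://169.254.169.254/latest/user-data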

User Data File.

Let's create virtual-ip.sh:

#!/bin/sh
# virtual-ip.sh

# Append the private NIC configuration, refresh the package index,
# then reboot so the new interface comes up.
echo '# The private network interface' >> /etc/network/interfaces
echo 'auto eth1' >> /etc/network/interfaces
echo 'iface eth1 inet dhcp' >> /etc/network/interfaces
apt-get update
shutdown -r now

Launching the Instances.

We will use nova boot to create the instances with both a public and a private network interface; we will also pass virtual-ip.sh as user data so that the private network gets configured on both instances.

Gathering Information.

On top of our network IDs, we want to pick a flavor and an image ID, as well as our key pair.

$ nova flavor-list
+--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name          | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
| 283d057a-2ef9-4ea0-88ec-f54b78d515ba | ra.intel.ha.l | 8192      | 80   | 0         |      | 2     | 1.0         | True      |
| 551dc104-4174-495a-af34-4aafe75f22ca | ra.intel.sb.l | 4096      | 30   | 0         |      | 1     | 1.0         | True      |
| 8f79ef0d-59ad-4792-82cd-829e0bb94f6b | ra.intel.ha.s | 2048      | 10   | 0         |      | 1     | 1.0         | True      |
| eb0aa3b3-f8d1-4dfa-854e-7990b14bc705 | ra.intel.ha.m | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| faa2002f-9057-4fe1-8401-fed7edb34059 | ra.intel.sb.m | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
$ glance image-list
+--------------------------------------+------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                   | Disk Format | Container Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------------+------------+--------+
| b8c362ce-cf13-4fc6-809c-218ce67fd78b | ap                     | qcow2       | bare             | 1659895808 | active |
| b2995d24-7c0b-437e-a2a6-5171181b2645 | CentOS 6               | qcow2       | bare             | 414242304  | active |
| 691f039a-aa19-47a7-b883-178b4e725cc7 | CentOS 6               | qcow2       | bare             | 577053184  | active |
| 9823f3b2-21b7-4591-8179-cf9be4d0a0a8 | CentOS 7               | qcow2       | bare             | 433732608  | active |
| 357cc131-0acd-4991-b0c2-cbcdc78d8c85 | CentOS 7               | qcow2       | bare             | 413734400  | active |
| 83872985-9101-4051-be83-22c30cc0fe89 | Debian 7               | qcow2       | bare             | 363578880  | active |
| 51152115-60df-4b21-957e-9ef962466d8b | Debian 7               | qcow2       | bare             | 410189824  | active |
| afa4dd1b-ff18-4214-9c43-44becd9026c6 | Dokku                  | qcow2       | bare             | 602865152  | active |
| 3c451df3-356f-4f98-90a1-3e92f392f177 | Fedora 19              | qcow2       | bare             | 532803072  | active |
| 1558d5e0-7ae3-4c6f-88b4-5927ed2a5333 | Fedora 19              | qcow2       | bare             | 766028800  | active |
| c640843e-412e-46bb-9687-94a8bf95ac0e | Fedora 19 Power 8      | qcow2       | bare             | 726866944  | active |
| 371ff458-bf67-47b3-81ff-6d2a393b243a | Fedora 20              | qcow2       | bare             | 543393280  | active |
| e7931a4d-2702-4277-ace9-087d8579d262 | Fedora 20              | qcow2       | bare             | 762748416  | active |
| b2ffb4d6-4e49-4243-b418-410493e165c2 | Ubuntu 12.04           | qcow2       | bare             | 564650496  | active |
| ecd32f1e-e921-4f7a-80b8-6f53a8f18fa2 | Ubuntu 12.04           | qcow2       | bare             | 563720704  | active |
| d0e8d240-0bfa-4415-88d2-9c5ee77c7e9f | Ubuntu 14.04           | qcow2       | bare             | 586631168  | active |
| 1b6b6361-9532-4533-a888-7a99d2c8b7cf | Ubuntu 14.04           | qcow2       | bare             | 621223424  | active |
| bf8927ce-5dda-4739-b09d-5604de5a8e06 | Ubuntu 14.04 Power 8   | qcow2       | bare             | 827981824  | active |
| 476570e2-ad54-4072-b74f-9391bee1a4a3 | Ubuntu 14.10           | qcow2       | bare             | 538364928  | active |
| 3f7ade4c-ed24-4128-816c-8727c6c95957 | Windows Server 2012 R2 | qcow2       | bare             | 4499260511 | active |
+--------------------------------------+------------------------+-------------+------------------+------------+--------+
$ nova keypair-list
+---------------+-------------------------------------------------+
| Name          | Fingerprint                                     |
+---------------+-------------------------------------------------+
| key-pair      | ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff |
+---------------+-------------------------------------------------+
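
If you would rather not copy UUIDs around by hand, a quick (and admittedly fragile) option is to capture them into shell variables by grepping the tables above; the variable names here are arbitrary:

# Naively grabs the first "Ubuntu 14.04" row; verify it is the image you want.
IMAGE_ID=$(glance image-list | awk '/Ubuntu 14.04/ {print $2; exit}')
EXT_NET=$(neutron net-list | awk '/Ext-Net/ {print $2}')
INT_NET=$(neutron net-list | awk '/Int-Net/ {print $2}')

You could then pass $IMAGE_ID, $EXT_NET and $INT_NET to nova boot below instead of pasting the IDs.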

Nova boot.

Run nova boot twice, specifying the instance names at the end.

$ nova boot --flavor ra.intel.sb.m --key_name key-pair --image d0e8d240-0bfa-4415-88d2-9c5ee77c7e9f --nic net-id=f5cc56db-db25-4488-8371-c507951b2631 --nic net-id=aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb --user-data virtual-ip.sh lb1
$ nova boot --flavor ra.intel.sb.m --key_name key-pair --image d0e8d240-0bfa-4415-88d2-9c5ee77c7e9f --nic net-id=f5cc56db-db25-4488-8371-c507951b2631 --nic net-id=aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb --user-data virtual-ip.sh lb2
+--------------------------------------+------------------------------------------------------+
| Property                             | Value                                                |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                               |
| OS-EXT-AZ:availability_zone          | nova                                                 |
| OS-EXT-STS:power_state               | 0                                                    |
| OS-EXT-STS:task_state                | scheduling                                           |
| OS-EXT-STS:vm_state                  | building                                             |
| OS-SRV-USG:launched_at               | -                                                    |
| OS-SRV-USG:terminated_at             | -                                                    |
| accessIPv4                           |                                                      |
| accessIPv6                           |                                                      |
| adminPass                            | znM2gNGCZ2mL                                         |
| config_drive                         |                                                      |
| created                              | 2015-03-22T15:47:12Z                                 |
| flavor                               | ra.intel.sb.m (faa2002f-9057-4fe1-8401-fed7edb34059) |
| hostId                               |                                                      |
| id                                   | becfcfee-8d6b-40b0-b448-aad56a5a08e0                 |
| image                                | Ubuntu 14.04 (d0e8d240-0bfa-4415-88d2-9c5ee77c7e9f)  |
| key_name                             | key-pair                                             |
| metadata                             | {}                                                   |
| name                                 | lb2                                                  |
| os-extended-volumes:volumes_attached | []                                                   |
| progress                             | 0                                                    |
| security_groups                      | default                                              |
| status                               | BUILD                                                |
| tenant_id                            | 1234568912345689123456891234                         |
| updated                              | 2015-03-22T15:47:13Z                                 |
| user_id                              | 123456891234568912345689aaaa                         |
+--------------------------------------+------------------------------------------------------+

Checking instance status.

$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                                 |
+--------------------------------------+------+--------+------------+-------------+------------------------------------------+
| d21201d5-9ab8-4be6-b330-c541757bbe65 | lb1  | ACTIVE | -          | Running     | Ext-Net=92.222.64.223; Int-Net=10.0.0.21 |
| 41c72076-9e56-4eb1-8687-e316f4ada327 | lb2  | ACTIVE | -          | Running     | Ext-Net=92.222.64.20; Int-Net=10.0.0.22  |
+--------------------------------------+------+--------+------------+-------------+------------------------------------------+

Our instances are up and running.
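
You should also be able to reach them over SSH on their public addresses; for the Ubuntu 14.04 image the login is typically the image's default cloud user, which we assume to be ubuntu here:

$ ssh ubuntu@92.222.64.223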

Creating a Virtual IP.

First, we reserve the address 10.0.0.20 on our private subnet by creating a dedicated port for it.

$ neutron port-create --fixed-ip ip_address=10.0.0.20 --security-group default Int-Net
Created a new port:
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| device_id             |                                                                                  |
| device_owner          |                                                                                  |
| fixed_ips             | {"subnet_id": "cccccccc-dddd-1111-2222-ffffffffffff", "ip_address": "10.0.0.20"} |
| id                    | 97ddb47b-5ddb-4b95-9a63-006823ce6815                                             |
| mac_address           | fa:16:3e:fa:05:fa                                                                |
| name                  |                                                                                  |
| network_id            | aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb                                             |
| security_groups       | ffffffff-0000-1111-2222-333333333333                                             |
| status                | DOWN                                                                             |
| tenant_id             | 1234568912345689123456891234                                                     |
+-----------------------+----------------------------------------------------------------------------------+

We are going to list the currently used network ports.

$ neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 0c7c668a-3992-4d50-b925-3f991843bb22 |      | fa:16:3e:fa:01:fa | {"subnet_id": "2c56a226-e78b-4268-b3d4-96e61e4fc0fe", "ip_address": "92.222.92.222"} |
| d836125f-1a51-460b-97a4-02665b848e12 |      | fa:16:3e:fa:02:fa | {"subnet_id": "2c56a226-e78b-4268-b3d4-96e61e4fc0fe", "ip_address": "92.222.91.111"} |
| 4613357d-a6ca-4031-bd6c-19574df6e456 |      | fa:16:3e:fa:03:fa | {"subnet_id": "cccccccc-dddd-1111-2222-ffffffffffff", "ip_address": "10.0.0.21"}     |
| 5f7b7b97-87c6-4118-addd-0a8fa0a5d353 |      | fa:16:3e:fa:04:fa | {"subnet_id": "cccccccc-dddd-1111-2222-ffffffffffff", "ip_address": "10.0.0.22"}     |
| 97ddb47b-5ddb-4b95-9a63-006823ce6815 |      | fa:16:3e:fa:05:fa | {"subnet_id": "cccccccc-dddd-1111-2222-ffffffffffff", "ip_address": "10.0.0.20"}     |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
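
The two instance ports are the ones holding 10.0.0.21 and 10.0.0.22. If you prefer, you can extract their IDs with a quick one-liner instead of copying them from the table; we use the explicit IDs below for clarity:

LB1_PORT=$(neutron port-list | awk '/10.0.0.21/ {print $2}')
LB2_PORT=$(neutron port-list | awk '/10.0.0.22/ {print $2}')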

Let's allow the virtual IP as an allowed address pair on both of our instances' private network ports.

$ neutron port-update 4613357d-a6ca-4031-bd6c-19574df6e456 --allowed_address_pairs list=true type=dict ip_address=10.0.0.20
$ neutron port-update 5f7b7b97-87c6-4118-addd-0a8fa0a5d353 --allowed_address_pairs list=true type=dict ip_address=10.0.0.20

To verify that the pairing was successful, we can show the details of lb1's port.

$ neutron port-show 4613357d-a6ca-4031-bd6c-19574df6e456
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs | {"ip_address": "10.0.0.20", "mac_address": "fa:16:3e:fa:03:fa"}                  |
| device_id             | d21201d5-9ab8-4be6-b330-c541757bbe65                                             |
| device_owner          | compute:None                                                                     |
| extra_dhcp_opts       |                                                                                  |
| fixed_ips             | {"subnet_id": "cccccccc-dddd-1111-2222-ffffffffffff", "ip_address": "10.0.0.21"} |
| id                    | 4613357d-a6ca-4031-bd6c-19574df6e456                                             |
| mac_address           | fa:16:3e:fa:03:fa                                                                |
| name                  |                                                                                  |
| network_id            | aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb                                             |
| security_groups       | ffffffff-0000-1111-2222-333333333333                                             |
| status                | ACTIVE                                                                           |
| tenant_id             | 1234568912345689123456891234                                                     |
+-----------------------+----------------------------------------------------------------------------------+

Keepalived.

Keepalived is routing software. Its main goal is to provide simple and robust facilities for load balancing and high availability on Linux-based infrastructures. High availability is achieved via the VRRP protocol, a fundamental building block for router failover.

We will install keepalived and its dependencies, and enable non-local binding so that a service can bind to a virtual IP that is not (yet) defined on the system. Since we aren't setting up any service right away this isn't strictly necessary, but doing it now means we won't have to come back to it later.

sudo su -
apt-get install keepalived
echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p
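
Verify that the setting took effect:

$ sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1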

Keepalived configuration.

The keepalived configuration resides in: /etc/keepalived/keepalived.conf. We will need to create this file on both instances.

! Configuration File for keepalived on lb1

vrrp_instance example {
    state MASTER
    interface eth1
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass example
    }
    virtual_ipaddress {
        10.0.0.20
    }
}

! Configuration File for keepalived on lb2

vrrp_instance example {
    state BACKUP
    interface eth1
    virtual_router_id 1
    priority 75
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass example
    }
    virtual_ipaddress {
        10.0.0.20
    }
}

We will need to (re)start keepalived on both instances with service keepalived restart. Once this is done we can test the failover by pinging the virtual IP from lb2 and taking down the private interface on lb1 with ifdown eth1. This should result in:

$ ping 10.0.0.20
PING 10.0.0.20 (10.0.0.20) 56(84) bytes of data.
64 bytes from 10.0.0.20: icmp_seq=1 ttl=64 time=0.554 ms
64 bytes from 10.0.0.20: icmp_seq=2 ttl=64 time=0.224 ms
64 bytes from 10.0.0.20: icmp_seq=3 ttl=64 time=0.245 ms
64 bytes from 10.0.0.20: icmp_seq=4 ttl=64 time=0.269 ms
64 bytes from 10.0.0.20: icmp_seq=5 ttl=64 time=0.199 ms
64 bytes from 10.0.0.20: icmp_seq=6 ttl=64 time=0.263 ms
64 bytes from 10.0.0.20: icmp_seq=7 ttl=64 time=0.230 ms
64 bytes from 10.0.0.20: icmp_seq=8 ttl=64 time=0.080 ms
64 bytes from 10.0.0.20: icmp_seq=9 ttl=64 time=0.049 ms
64 bytes from 10.0.0.20: icmp_seq=10 ttl=64 time=0.062 ms
64 bytes from 10.0.0.20: icmp_seq=11 ttl=64 time=0.038 ms
64 bytes from 10.0.0.20: icmp_seq=12 ttl=64 time=0.037 ms
64 bytes from 10.0.0.20: icmp_seq=13 ttl=64 time=0.053 ms
64 bytes from 10.0.0.20: icmp_seq=14 ttl=64 time=0.041 ms

Some packet loss is possible while the virtual IP moves to its new home, but for a failover solution this is negligible; rerouting took between 1 and 3 seconds in the above setup. Notice how the round-trip times drop from roughly 0.2 ms to under 0.1 ms once lb2 takes over: the VIP is then local to the pinging instance.
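
To see which instance currently holds the virtual IP, check eth1 on each node; keepalived adds the address as a /32 on the current master, so on that node you should see something like:

$ ip addr show eth1 | grep 10.0.0.20
    inet 10.0.0.20/32 scope global eth1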


Sources:

[1] How to use OpenStack command line tools - RunAbove