Configuring a multistage environment with Ansible and Vagrant

Ansible is a powerful and clean tool for automation. This post covers the configuration of a multistage environment, consisting of a local development environment controlled by Vagrant and one or more remote servers (staging, production, etc.) controlled directly by Ansible, reusing a pre-existing development environment provisioning.

 

Written by Erika Heidi on Wednesday August 13, 2014 - Categories: DevOps, Linux, Vagrant - Tags: devops, vagrant, ansible, deployment, multistage

These instructions cover the server and control machine configuration needed to run Ansible in a multistage environment: Vagrant controls a local dev VM, while one or more remote servers (production, staging) are controlled via Ansible. This way you can reuse most of your Vagrant provisioning to create a powerful deployment strategy.

I'm using Ubuntu for both the servers and the controlling machine. This post is inspired by the excellent post from Ross Tuck about multistage environments with Ansible. For more information on how to organize your inventories into groups and how to use group_vars to make your playbooks more flexible, please have a look at his post. You can also have a look at what I did for phansible.com, by checking its repository on GitHub.

Here you will find how to configure the servers and the local environment to handle the multistage setup with Vagrant and Ansible.

Assumptions:

  • You are already familiar with Vagrant and Ansible
  • You have a functional Vagrant provisioning with Ansible for a local dev environment
  • You have a remote, brand-new server / VPS to use as staging or production environment (superuser credentials needed)

To simplify the post, I will consider two environments: a local development environment (controlled via Vagrant) and a staging environment, a remote VPS. Each environment consists of a single machine (a webserver). You can easily scale this to your needs.

Why Staging? To get started with automated deploys, I strongly recommend that you have a staging environment to test your provisioning before running it on production. Once you make sure everything is working as you want, it's just a matter of adding a new environment. I don't recommend trying this first on a production server.

1. Setting up your Inventories

By default, Vagrant automatically creates an inventory file and keeps it inside the hidden .vagrant folder in your project root - there's a chance you never even saw it. In order to control multiple environments, we'll need to create our very own inventory files. We need one inventory per environment, so for this example we'll have two inventories: dev and staging.

#ansible/inventories/dev
[webservers]
192.168.56.121

#ansible/inventories/staging
[webservers]
104.131.202.76

The dev inventory should contain the same IP address used in your Vagrantfile. The staging inventory, naturally, will have the IP address of your staging server.
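
Ross Tuck's post (linked above) goes deeper into organizing inventories into groups and using group_vars. As a quick hypothetical sketch (the doc_root variable is invented for illustration), a file named after the [webservers] group is picked up automatically when it sits in a group_vars directory next to the playbook:

#ansible/group_vars/webservers.yml
---
doc_root: /var/www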

2. Configuring your Vagrantfile

Now we need to update our Vagrantfile to make sure it uses our new dev inventory file. We basically need two more lines in our Ansible block:

config.vm.provision "ansible" do |ansible|
    ansible.playbook = "ansible/provision.yml"
    # Point Vagrant to our own dev inventory instead of its auto-generated one
    ansible.inventory_path = "ansible/inventories/dev"
    # Run the play against all hosts in that inventory
    ansible.limit = 'all'
end

The ansible.limit option is a recent requirement when using custom inventories. Without it, Vagrant limits the run to a host whose inventory name matches the name of the Vagrant machine.

Run vagrant up (or reload / provision) to make sure it's working.
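
If the VM is already up, re-running just the provisioner is the fastest check:

$ vagrant provision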

3. Configuring the Server

3.1 Create a user

Log in to the web server as root and create a user. In this example I will use "deploy" as the username.

$ adduser deploy

3.2 Add the user to sudoers

This user will need permission to run sudo without being asked for a password, so that Ansible can run without extra parameters. Edit the sudoers file by running:

$ visudo

And add this to the end:

deploy ALL=(ALL) NOPASSWD: ALL

This will ensure that the user "deploy" won't need to provide a password when executing commands with sudo.
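
Alternatively - a sketch assuming a stock Ubuntu box, where /etc/sudoers already includes /etc/sudoers.d - you can keep the rule in its own file and validate it, instead of editing the main sudoers file:

$ echo 'deploy ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/deploy
$ chmod 0440 /etc/sudoers.d/deploy
$ visudo -cf /etc/sudoers.d/deploy

The last command checks the file's syntax; a broken sudoers file can lock you out of sudo.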

4. Setting up a new SSH key-pair for deployment

In order to run ansible and ansible-playbook smoothly, we need to use key-pair authentication, and it's strongly recommended that you have a dedicated key for each external environment. Let's create a new key-pair for the staging environment. Be careful not to overwrite your existing keys: leave the passphrase blank and choose a name that you won't confuse with your personal keys. I'm using server-staging as the name.

$ ssh-keygen -t rsa -f ~/.ssh/server-staging -C "deploy@stagingserver.com"

Now copy the key to the server, by running:

$ ssh-copy-id -i ~/.ssh/server-staging deploy@stagingserver.com

Where ~/.ssh/server-staging is the key you just generated, and stagingserver.com is the hostname of your server (an IP address works as well). You will be asked to provide the password for the user deploy on this server (the user you created in the previous step).

Test the connection - if everything is all right, you should be able to log in without providing any password. As this is not the default key-pair for your current user, you need to provide the key path as an argument:

$ ssh deploy@stagingserver.com -i ~/.ssh/server-staging

In order to be able to connect directly, without specifying the key, you can add an entry to your SSH config file:

# ~/.ssh/config
Host 104.131.202.76 server-staging
  HostName 104.131.202.76
  User deploy
  IdentityFile ~/.ssh/server-staging

Now you can easily log in using the alias server-staging:

$ ssh server-staging

Or, if you prefer, you can use the IP address instead of the alias.

For detailed information about SSH config files, have a look at the ssh_config manual page (man ssh_config).
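
If you'd rather not touch your SSH config, here is a sketch of the equivalent setup kept entirely in the inventory, using Ansible's ansible_ssh_user and ansible_ssh_private_key_file host variables:

#ansible/inventories/staging
[webservers]
104.131.202.76 ansible_ssh_user=deploy ansible_ssh_private_key_file=~/.ssh/server-staging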

5. Run some test commands with Ansible

Now that both the control machine and the server are configured, you can test whether your setup is working by running a command like this:

$ ansible all -i ansible/inventories/staging -a "/bin/echo hello"

You should see an output similar to this:

104.131.202.76 | success | rc=0 >>
hello

Note that the command was executed only on the staging server (defined by the inventory). If something goes wrong, increase the verbosity by appending -vvvv to the command.
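
Another common smoke test is Ansible's ping module, which verifies that the login works and that Python is available on the remote host; the output should look roughly like this:

$ ansible all -i ansible/inventories/staging -m ping
104.131.202.76 | success >> {
    "changed": false,
    "ping": "pong"
}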

6. Run your playbook

If you got a "success" output from the test command, you should be ready to execute a playbook on the staging server:

$ ansible-playbook -i ansible/inventories/staging ansible/provision.yml
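
For reference, a minimal sketch of what a provision.yml can look like (the nginx task is just a placeholder; your existing playbook will differ):

#ansible/provision.yml
---
- hosts: webservers
  sudo: yes
  tasks:
    - name: Install Nginx
      apt: pkg=nginx state=present update_cache=yes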

TL;DR

This post covered how to configure a remote server and a local environment to get a functional multistage environment controlled by Ansible, using Vagrant locally for development. It targets people who already use Vagrant with Ansible and would like to reuse their current provisioning for deployment to staging / production servers.
