Part 1: Exploring a localhost development story powered by Docker

The Google App Engine development server is a great tool for quickly and easily simulating the App Engine sandbox. However, when you want to simulate your full cloud environment, or even just portions of it, things get complicated fast. Let's take a look at a greatly simplified visualization of this.

We have an underlying infrastructure cloud project (App Engine / Compute Engine) which powers many of our user-facing web applications. These user-facing applications (App Engine) have varying layers of interconnectedness with each other and with Google's serving infrastructure. Also becoming an important part of our stack is our Elasticsearch cluster, which is containerized and hosted on Compute Engine.

It takes a fair bit of effort to maintain a local development story. We need Docker up and running for Elasticsearch and its related nginx proxy. We need multiple instances of the development appserver running for the actual projects. Currently the local story is brought up and down with a Python wrapper, which in turn is powered by invoke. While this works, it's fragile, and it's already hard to scale as our stack increases in complexity and as we continue to grow our team.
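
To give a sense of what that wrapper is actually orchestrating, it boils down to something like the following (the paths and project names here are illustrative, not our exact layout):

$ docker-compose up -d elasticsearch nginx                              # search stack
$ dev_appserver.py ./infrastructure/src --port=8080 --admin_port=9080 &
$ dev_appserver.py ./project1/src --port=8081 --admin_port=9081 &
$ dev_appserver.py ./project2/src --port=8082 --admin_port=9082 &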

There has to be a better way. Docker to the rescue? We already use Docker Compose to orchestrate the containers for Elasticsearch and its nginx companion. Perhaps we can move the dev appserver into our compose file as well?

First things first, let's get the Google Cloud SDK containerized. It turns out that Google has already done this and made the image available on Docker Hub. That was easy! Now, let's take a look at what a compose file might look like (again, greatly simplified). Note that we are mounting source code and project-specific datasets directly into the containers so that we won't need to rebuild them for code changes to be reflected.

version: '2'
services:
  elasticsearch:
    build: ./containers/elasticsearch/elasticsearch
    ports:
      - "9200:9200"
    environment:
      - PROJECT_ID=localhost
    volumes:
      - ./snapshots/:/snapshots/
  nginx:
    build: ./containers/elasticsearch/nginx
    links:
      - elasticsearch:localhost
    ports:
      - "9201:9201"
  infrastructure:
    build: ./containers/cloud_sdk
    command: dev_appserver.py /src/ --host=0.0.0.0 --admin_host=0.0.0.0 --port=8080 --admin_port=9080 --storage_path=/data/
    ports:
      - "8080:8080"
      - "9080:9080"
    volumes:
      - /infrastructure/src:/src
      - /infrastructure/data:/data
  project1:
    build: ./containers/cloud_sdk
    command: dev_appserver.py /src/ --host=0.0.0.0 --admin_host=0.0.0.0 --port=8081 --admin_port=9081 --storage_path=/data/
    ports:
      - "8081:8081"
      - "9081:9081"
    volumes:
      - /project1/src:/src
      - /project1/data:/data
  project2:
    build: ./containers/cloud_sdk
    command: dev_appserver.py /src/ --host=0.0.0.0 --admin_host=0.0.0.0 --port=8082 --admin_port=9082 --storage_path=/data/
    ports:
      - "8082:8082"
      - "9082:9082"
    volumes:
      - /project2/src:/src
      - /project2/data:/data
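
A nice side effect of having everything in one compose file is that individual services can be rebuilt and restarted in isolation while iterating on the container definitions, for example:

$ docker-compose build infrastructure        # rebuild just the cloud_sdk-based service
$ docker-compose up -d infrastructure        # restart only that service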

We can now let Docker and docker-compose do the heavy lifting and bring the whole thing up for us!

$ docker-compose up -d

Great success! Our Elasticsearch containers and development appserver containers are up and running. Wow… that was easy. Let's start hacking!
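
As a quick sanity check, everything should respond on the ports mapped in the compose file above:

$ docker-compose ps                  # all services should show State "Up"
$ curl http://localhost:9200/        # Elasticsearch
$ curl http://localhost:8080/        # infrastructure dev appserver
$ curl http://localhost:8081/        # project1 dev appserver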

Problem 1 (VirtualBox)

WTF!? Changes I'm making aren't being reflected / seen by the dev appserver. After some research, it turns out that VirtualBox's shared folder implementation does not trigger the inotify file-watching mechanism when files are edited on the Mac OS X side of the volume mount. If you want to read more on this subject, there is a workaround / great write-up here.
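
One workaround in this family is to sidestep inotify entirely and have the dev appserver poll file mtimes instead. Assuming your SDK version supports the flag, the command in the compose file would just grow one extra option, e.g.:

$ dev_appserver.py /src/ --host=0.0.0.0 --port=8080 --admin_port=9080 --use_mtime_file_watcher=True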

OK… so I have a workaround, but I don't like it. Some of the alternatives listed show Vagrant as a possible way out. Let's give it a shot!

After some trial and error I have a Vagrantfile that gets Docker up and running and has an rsync plugin activated that will sync my code into the shared volumes for me. I ended up using a third-party plugin to make rsync work as desired; it is available at https://github.com/smerrill/vagrant-gatling-rsync.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "docker"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network "forwarded_port", guest: 2375, host: 2375

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network "private_network", ip: "192.168.33.10"

  config.vm.synced_folder "/", "/vagrant/",
    type: "rsync", rsync__exclude: [".git/", ".idea/", "*.pyc", "build/", "node_modules/"], rsync__args: ["--verbose", "--archive", "--delete", "-z"]

  # Configure the window for gatling to coalesce writes.
  if Vagrant.has_plugin?("vagrant-gatling-rsync")
    config.gatling.latency = 1
    config.gatling.time_format = "%H:%M:%S"

    # Automatically sync when machines with rsync folders come up.
    config.gatling.rsync_on_startup = true
  end

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    # vb.gui = true

    # Customize the amount of memory on the VM:
    vb.memory = "4096"
  end
config.vm.provision "shell", inline: <> /etc/default/docker
sudo service docker restart
SHELL
end
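
One thing worth calling out: the gatling block above only takes effect if the plugin is actually installed on the host, and (if memory serves) the continuous watch is driven by the plugin's own command rather than plain vagrant up:

$ vagrant plugin install vagrant-gatling-rsync
$ vagrant gatling-rsync-auto     # keep syncing as files change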

So, now I’m at a point where I have two “up” commands.

$ vagrant up
$ docker-compose up -d
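
There's one piece of glue implied here: the Docker daemon now lives inside the Vagrant box, so the docker-compose client on the Mac has to be pointed at the forwarded port before the second command will work, something along these lines:

$ export DOCKER_HOST=tcp://localhost:2375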

It works… kind of…

Problem 2 (It’s slow)

Changes are now being loaded into the containers… but it's slow. OMG, it's too slow to be usable. Now what?

As timing would have it, people are already working to make this easier for us. Docker just announced a new product that should make all of the above work… I hope. Check it out: https://blog.docker.com/2016/03/docker-for-mac-windows-beta/

Will update once I get on the beta.
