Monday, May 16, 2016

First Impressions of Docker for Mac Beta

While this isn't exactly fresh news anymore, I believe this tool is awesome enough that we should dig into it. Note that the Docker for Mac beta is still closed; however, they seem to be rolling it out fairly quickly. If you haven't done so yet, you can sign up here: https://beta.docker.com/

Why is it awesome?


There are a few reasons:
  1. First and foremost, there is no longer a VM in the way of interacting with the Docker daemon. Well...that isn't quite true, as the Docker daemon runs within a very thin Alpine Linux VM on the xhyve hypervisor. However, it is very transparent, and so far, in terms of file mount speed and general usability, boot2docker / Vagrant based solutions are dead to me.
  2. It really is a native Mac OSX application. As you will see below, it is very easy to download and install the .dmg, and it has no external dependencies. Oh....and it has automatic updates baked right in.
  3. It only takes a few minutes to get the Docker for Mac beta app downloaded and installed, and it fits right into the development workflow.

Download and Install “Docker for Mac” 


When you're selected for the beta you will receive an email that includes a link to the download page and a key that will allow you to activate your beta. Once you have downloaded it, you can drag the Docker for Mac beta into your Applications folder.


Now that you have installed the “Docker beta” app you can start it up. On first launch you will be prompted to enter your invite token, which is part of the welcome email you received.


There are a few other prompts but I won't bore you with minor details. In very short order you will have Docker up and running!



Using “Docker for Mac” the first time


To use Docker, open up the Terminal app and let's check that Docker is running with a version check. And just to make sure I'm not tricking you, let's prove that it isn't a docker-machine backed VM that is exposing the Docker daemon:
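The session looked roughly like this. A hedged note: `docker-machine ls` only applies if you still have docker-machine installed from an older setup, and your version numbers will obviously differ:

```shell
# Client and server report back over /var/run/docker.sock -- no VM endpoint needed
docker version

# No docker-machine managed VMs in sight...
docker-machine ls

# ...and no DOCKER_* environment variables pointing the client at one
env | grep '^DOCKER' || echo "no DOCKER_* variables set"
```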


Making configuration changes 


There is a handy settings panel available from the application which allows some basic tweaking of the allotted CPU / memory.


There is also a very nice CLI that allows you to interact with the Docker app. It is exposed via the "pinata" command.



I would imagine that over time more of these configuration options will make it into the app's settings page, but for now we can easily make changes via the command line.
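For the curious, these are the subcommands I found myself reaching for. Treat the exact names and arguments as a sketch from this point in the beta, since the pinata interface was changing from build to build:

```shell
# Show the tunable options and their current values
pinata list

# Read a single setting...
pinata get daemon

# ...and write one back (here, the daemon's JSON configuration)
pinata set daemon '{"storage-driver": "aufs"}'
```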

Tuesday, April 5, 2016

Part 2: Using X11 forwarding with a localhost development story powered by Docker

In discussion with a co-worker regarding the localhost development conundrum, he asked if I had tried X11 forwarding. I promptly replied no, and we briefly discussed the potential impact this could have.

What could this mean for the localhost story? 


Well....the problems that were encountered in the first iteration of this should, in theory, no longer exist.
  1. File change events would be raised properly, since we would be editing files natively on the VM.
  2. We wouldn't have to wait for files to be sync'd between OSX and the VM, and then wait for the file change events to occur.
A visual recap of the environment I ended up with from the first iteration looks like this:


Vagrant was leveraged to provision the VirtualBox VM with Ubuntu and to handle the file syncing. Docker Compose was used to bring up the containers, and I was making source modifications within my IDE.

This changes only slightly as we go down the X11 route, as we would simply be moving the IDE into the provisioned VM:


First things first, we need to install XQuartz on the Mac. After that, let's ssh into the VM and install / run PyCharm (note the ssh flags, which enable X11 forwarding and compression).

ssh -XC vagrant@192.168.33.10

mkdir -p ~/opt/packages/pycharm
cd ~/opt/packages/pycharm
wget https://download.jetbrains.com/python/pycharm-professional-2016.1.1.tar.gz
gzip -dc pycharm-professional-2016.1.1.tar.gz | tar xf -
ln -s ~/opt/packages/pycharm/pycharm-2016.1.1/ ~/opt/pycharm
# Install zenity (required by PyCharm)
sudo apt-get update
sudo apt-get install zenity
# Start PyCharm
~/opt/pycharm/bin/pycharm.sh
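Before launching the IDE, it's worth confirming that forwarding is actually live inside the ssh session. A quick sketch (xeyes ships in the x11-apps package):

```shell
# sshd sets DISPLAY when -X forwarding succeeds (e.g. localhost:10.0)
echo "DISPLAY=$DISPLAY"

# Any throwaway X client makes a cheap smoke test; a window should
# appear on the Mac side if forwarding is working
sudo apt-get install -y x11-apps
xeyes
```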

In terms of look and feel it's "almost" like you're working natively.....but it's just a little awkward. I think that's the best word to describe it. There are some minor graphical issues, and window sizing just doesn't behave the way I've come to expect from a native app.

In terms of speed it's still a little sluggish. Much faster than relying upon rsync, but still not quite good enough, IMO.


It was a worthy shot, but just not one that lends itself to a repeatable / maintainable environment. 

Sunday, March 27, 2016

Part 1: Exploring a localhost development story powered by Docker

The Google App Engine development server is a great tool for quickly and easily simulating the App Engine sandbox. However, when you want to simulate your full cloud environment, or portions of it, complexity rules the day. Let's take a look at a greatly simplified visualization of this.


We have an underlying infrastructure cloud project (App Engine / Compute Engine) which powers many of our user facing web applications. These user facing applications (App Engine) have varying layers of interconnectedness with each other and with Google's serving infrastructure. Also becoming an important part of our stack is our Elasticsearch cluster, which is containerized and hosted within Compute Engine.

It takes a fair bit of effort to maintain a local development story. We need Docker up and running for Elasticsearch and its related nginx proxy. We need multiple instances of the development appserver up and running for the actual projects. Currently the local story is brought up and down with a python wrapper, which in turn is powered by invoke. While this works....it's fragile and is already hard to scale as our stack increases in complexity and as we continue to grow our team.

There has to be a better way. Docker to the rescue? We already use Docker Compose to orchestrate the containers for Elasticsearch and its nginx companion. Perhaps we can move the dev appserver into our compose file as well?

First things first, let's get the Google Cloud SDK containerized. It turns out that Google has already done this and made it available on Docker Hub here. That was easy! Now let's take a look at what a compose file might look like (again, greatly simplified). Note that we are mounting the source code and project specific datasets directly into the containers, so that we won't need to rebuild them for code changes to be reflected.

version: '2'
services:
  elasticsearch:
    build: ./containers/elasticsearch/elasticsearch
    ports:
      - "9200:9200"
    environment:
      - PROJECT_ID=localhost
    volumes:
      - ./snapshots/:/snapshots/
  nginx:
    build: ./containers/elasticsearch/nginx
    links:
      - elasticsearch:localhost
    ports:
      - "9201:9201"
  infrastructure:
    build: ./containers/cloud_sdk
    command: dev_appserver.py /src/ --host=0.0.0.0 --admin_host=0.0.0.0 --port=8080 --admin_port=9080 --storage_path=/data/
    ports:
      - "8080:8080"
      - "9080:9080" 
    volumes:
      - /infrastructure/src:/src
      - /infrastructure/data:/data 
  project1:
    build: ./containers/cloud_sdk
    command: dev_appserver.py /src/ --host=0.0.0.0 --admin_host=0.0.0.0 --port=8081 --admin_port=9081 --storage_path=/data/  
    ports:
      - "8081:8081"
      - "9081:9081" 
    volumes:
      - /project1/src:/src
      - /project1/data:/data 
  project2:
    build: ./containers/cloud_sdk
    command: dev_appserver.py /src/ --host=0.0.0.0 --admin_host=0.0.0.0 --port=8082 --admin_port=9082 --storage_path=/data/
    ports:
      - "8082:8082"
      - "9082:9082" 
    volumes:
      - /project2/src:/src
      - /project2/data:/data    
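Before bringing this stack up, a quick sanity check that the image actually ships the development appserver doesn't hurt. This assumes the Docker Hub image is named google/cloud-sdk, matching the build context above:

```shell
# Fetch the SDK image and confirm dev_appserver.py is on its PATH
docker pull google/cloud-sdk
docker run --rm google/cloud-sdk dev_appserver.py --help
```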

We can now let docker & docker-compose do the heavy lifting and bring the whole thing up for us!

$ docker-compose up -d

Great success! Our Elasticsearch and development appserver containers are up and running. Wow...that was easy, let's start hacking!

Problem 1 (Virtual Box)

WTF!? Changes I'm making aren't being reflected / seen by the dev appserver. After some research, it turns out that VirtualBox shared folders do not trigger the inotify file watching mechanism when files are edited on the Mac OSX side of the volume mount. If you want to read more on this subject, there is a workaround / great write up here.
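The issue is easy to reproduce first-hand by watching for events inside the VM while editing from the Mac side. A sketch, assuming inotify-tools is available and the share is mounted at /vagrant (both the package and the paths here are illustrative):

```shell
# Inside the VM: watch a directory on the shared mount for changes
sudo apt-get install -y inotify-tools
inotifywait -m -e modify,create /vagrant/src &

# Touching a file from inside the VM prints a MODIFY/CREATE event; editing
# the same file from the OSX side of a vboxsf mount prints nothing
touch /vagrant/src/app.py
```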

Ok...so I have a workaround, but I don't like it. Some of the alternatives listed mention Vagrant as a possible way out. Let's give it a shot!

After some trial and error, I have a Vagrantfile that gets Docker up and has an rsync plugin activated that syncs my code to the shared volumes for me. I ended up using a third-party plugin to make rsync work as desired; it is available at: https://github.com/smerrill/vagrant-gatling-rsync

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "docker"  

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network "forwarded_port", guest: 2375, host: 2375
  
  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network "private_network", ip: "192.168.33.10"

  config.vm.synced_folder ".", "/vagrant/",
    type: "rsync", rsync__exclude: [".git/", ".idea/", "*.pyc", "build/", "node_modules/"], rsync__args: ["--verbose", "--archive", "--delete", "-z"]

  # # Configure the window for gatling to coalesce writes.
  if Vagrant.has_plugin?("vagrant-gatling-rsync")
    config.gatling.latency = 1
    config.gatling.time_format = "%H:%M:%S"

    # Automatically sync when machines with rsync folders come up.
    config.gatling.rsync_on_startup = true    
  end

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    # vb.gui = true
  
    # Customize the amount of memory on the VM:
    vb.memory = "4096"
  end

  config.vm.provision "shell", inline: <<-SHELL
    echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' >> /etc/default/docker
    sudo service docker restart
  SHELL
end

So, now I'm at a point where I have two "up" commands.

$ vagrant up
$ docker-compose up -d
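One detail that's easy to miss between those two commands: the docker / docker-compose clients on the Mac need to be pointed at the daemon the VM exposes on the forwarded port (2375 in the Vagrantfile above):

```shell
# Point the local docker clients at the daemon running inside the Vagrant VM
export DOCKER_HOST=tcp://localhost:2375
# From here on, docker-compose up -d talks to the VM's daemon
echo "docker client now targets $DOCKER_HOST"
```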

It works.....kind of....

Problem 2 (It's slow)

Changes are now being loaded into the containers....but it's slow....omg it's too slow to be usable. Now what?

OMG OMG, as timing would have it, people are working to make this easier for us. Docker just announced a new product that is going to make all of the above work....I hope...check it out: https://blog.docker.com/2016/03/docker-for-mac-windows-beta/

Will update once I get on the beta.