Simple Ways to Start Using Docker


Interested in simple ways to start using Docker from the bottom up?  This article presents four patterns for using Docker as an individual or small team, from development through production.  These patterns address the difficulty of:

  • trying new tools and approaches
  • configuring and updating toolchains for technical computing environments
  • developing and deploying infrastructure services

First, a quick reminder of the basic benefits of containerization:

  • isolation – isolate application processes, filesystem, and network dependencies; limit and prioritize memory, CPU, and network resource usage
  • packaging – package applications, their dependencies, and metadata in a format that is easy to distribute and manage

 

These isolation and packaging features make it simple to build, ship, and run many different kinds of applications on the same host — especially when those applications are weird or troublesome.


Patterns for Using Docker

There are many patterns for using and deploying containerized applications.  This article focuses on the patterns that are especially good for learning to build and run containerized applications in a low-risk way:

  1. Exploratory Sandbox
  2. Packaged Tool
  3. Packaged Environment
  4. Packaged Service

These patterns can be applied in numerous operational contexts and organizational scopes, as noted in each pattern description below.

 

Pattern: Exploratory Sandbox

 

Problem: Trying new tools and approaches is hindered by the risk of breaking or polluting a working environment

Solution: Experiment with new tools and approaches in a disposable sandbox

Operational Context: development, operations

Organizational Scope: individual

 

Exploratory Sandbox

Execution mode:

  1. one-off

Examples:

  • try new software
  • try risky commands

Description: The Exploratory Sandbox pattern starts a shell inside a base operating environment, similar to the one you usually use, for the purpose of experimentation.  Optionally, provide environment variables and/or data to the container via a read-only volume.

The Exploratory Sandbox pattern is a great place to start using containers because you can not only perform experimental or risky work, but also become accustomed to the power of the isolation features containers provide.  The canonical example of an exploratory sandbox demonstrates the filesystem isolation feature of containers by starting a new container and then removing a system directory to see what happens (clearly a ‘risky’ operation):

Start a fresh container with a bash shell:

# important note: notice there are no volumes mounted with -v 
# which would mount external data into the container and make it susceptible to destruction 
yourhost$ docker run -h isolation --rm -it centos:7.2.1511 bash

Double-check that you are inside the container, as you are about to destroy its filesystem! Your shell prompt should look like:

[root@isolation /]#

Now…on with the destruction!

# list files in /usr/bin to prove this is a normal CentOS installation
[root@isolation /]# ls /usr/bin

# remove all files in /usr/bin!
[root@isolation /]# rm -rf /usr/bin

# try listing files again
[root@isolation /]# ls /usr/bin

The second listing of /usr/bin should result in an error, bash: ls: command not found, because the ls program has been removed.

Type exit to leave the broken container.
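Incidentally, exit still works in the broken container because it is a shell builtin, implemented by bash itself rather than shipped as a program in /usr/bin.  A quick way to see this from any shell:

```shell
# builtins such as echo, cd, and exit survive the removal of /usr/bin
# because bash implements them internally rather than executing files
type echo
# reports that echo is a shell builtin

echo 'builtins survive'
```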

Now, to prove nothing on the host has been affected, start a new container and list /usr/bin again:

yourhost$ docker run -h isolation --rm -it centos:7.2.1511 bash
# list files in fresh container
[root@isolation /]# ls /usr/bin
... snip output ...

Explore with safety & confidence!
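The description above mentions optionally providing data to the sandbox via a read-only volume.  Here is a sketch of how that might look; the sample-data directory name is just an illustration:

```shell
# create some host data to share with the sandbox
mkdir -p ./sample-data
echo 'hello from the host' > ./sample-data/note.txt

# mount it read-only (:ro) so experiments inside the
# container cannot modify or delete the original files
docker run -h isolation --rm -it \
  -v "$(pwd)/sample-data:/data:ro" \
  centos:7.2.1511 bash

# inside the container, /data/note.txt is readable, but any
# attempt to write under /data fails with 'Read-only file system'
```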

 

Pattern: Packaged Tool

 

Problem: Many tools are complicated to install, configure, and maintain, or they conflict with other tools

Solution: Package a single tool for use as a direct replacement for a locally installed binary

Operational Context: development, operations, training

Organizational Scope: individual, team, organization

 

Packaged Tool

Execution mode:

  1. one-off execution

Examples:

  • mvn
  • nodejs
  • tcpdump
  • aws-cli
  • inspec

Description: The Packaged Tool pattern packages a single tool inside a Docker image, invoked via the ENTRYPOINT, for use as a direct replacement for a locally installed binary.  Optionally, provide environment variables and/or data to the container via a volume.

The Packaged Tool pattern is great for managing widely used tools on development machines, especially those that execute in interpreted runtimes such as Ruby, Python, and Node.js.  Let’s walk through an example of installing and using the aws-cli via this pattern.

The qualimente/aws-cli Dockerfile simply installs the aws command-line tools and invokes aws at container start time via the ENTRYPOINT:

FROM centos:7.2.1511

RUN yum -y install epel-release

ENV PACKAGE_DEPS='python2-pip jq groff'
RUN yum -y update \
  && yum -y install $PACKAGE_DEPS \
  && yum clean all

RUN pip install --upgrade awscli

VOLUME /work
WORKDIR /work
ENTRYPOINT ["aws"]

Export AWS credential & region configuration as environment variables:

export AWS_ACCESS_KEY_ID="AWS_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="AWS_SECRET"
export AWS_DEFAULT_REGION="desired-aws-region"

Define a shell alias for aws that runs the aws-cli image, providing the AWS environment variables and mounting the current directory as /work inside the container:

# define the alias on a single line so it expands as one command
alias aws='docker run --rm -it -e "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}" -e "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}" -e "AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}" -v "$(pwd):/work" qualimente/aws-cli:1.11.28'

Using specific environment variables and a specific image version creates a narrow support interface.


The aws-cli image’s entrypoint command is aws, so you may use the alias as a drop-in replacement on your system, e.g. to list the account’s EC2 instances:

your-host$ aws ec2 describe-instances
{
 "Reservations": []
}

 

Pattern: Packaged Environment

 

Problem: Technical computing environments rely on many tools that are difficult to get working together, and the complexity of each tool accumulates and compounds

Solution: Package a complete, tested environment to replace locally installed binaries and configurations

Operational Context: development, operations, training, production

Organizational Scope: team, organization

Alternative to: Vagrant

 

Packaged Environment

Execution mode:

  1. long-lived shell
  2. one-off execution

Examples:

  • software development
  • infrastructure development and management
  • data analysis
  • security analysis
  • technical training

Description: The Packaged Environment pattern packages a complete, tested environment to replace locally (and often manually) installed binaries and configurations.  Optionally, provide environment variables and/or data to the container via a volume.

The Packaged Environment pattern is great for managing software development, infrastructure development, and training environments, which are full of tools that need specific versions and whose correctness is critical for efficient workflows and support.  These environments can be built and maintained for a single individual or a large organization.  Creating a Packaged Environment is usually straightforward and mostly entails identifying the environment’s dependencies.

As an example, I recently created a Packaged Environment for the Udacity Networking for Web Developers course so that I could share it with others (skuenzli/docker-udacity-networking Dockerfile):

FROM ubuntu:16.04

ENV PACKAGES='netcat-openbsd tcpdump traceroute mtr net-tools iproute2 iputils-ping dnsutils man lsof python'

RUN apt-get update \
  && apt-get upgrade -y \
  && DEBIAN_FRONTEND=noninteractive apt-get install -y $PACKAGES \
  && apt-get clean  && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ENV PS1='Linux $ '

You can build the image if you like, but it’s simpler and more deterministic to run the tagged version from Docker Hub:

your-host$ docker run -h networking-course --rm -it --name udacity_networking skuenzli/udacity-networking:2017-01-01
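If you do prefer to build the image yourself, it is a standard docker build from a checkout of the repository; the local tag name below is just an example:

```shell
# from the directory containing the Dockerfile shown above
docker build -t udacity-networking:local .

# then run your locally built image in place of the Docker Hub tag
docker run -h networking-course --rm -it udacity-networking:local
```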

Now we can get on with the important matter of working through networking exercises!

Let’s perform one of my favorites: speaking HTTP directly to Udacity’s webserver:

root@networking-course:/# printf 'HEAD / HTTP/1.1\r\nHost: www.udacity.com\r\n\r\n' | nc www.udacity.com 80
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: https://www.udacity.com/
Connection: Close

 

Pattern: Packaged Service

 

Problem: Cross-cutting infrastructure services are difficult to develop and deploy, and are susceptible to resource-usage problems

Solution: Deploy a complete, tested application to replace locally installed binaries and configurations

Operational Context: test, production

Organizational Scope: team, organization

 

Packaged Service

Execution mode:

  1. long-lived service

Examples:

  • logging: logstash, fluentd
  • monitoring: collectd, datadog

Description: The Packaged Service pattern packages a single service inside a Docker image and invokes it via the ENTRYPOINT for use as a direct replacement for a locally installed binary.  Optionally provide the container with:

  • command-line options via CMD
  • environment variables
  • data via a volume
  • network traffic by exposing ports

Packaged Service is technically almost identical to Packaged Tool, but very different semantically.  The Packaged Service pattern allows a function-oriented team within an organization, such as the Monitoring or Logging team, to develop, test, and deploy a service integration point that ships telemetry from a variety of hosts into centralized services under their control.  The function inside the Packaged Service is:

  • developed and deployed on a different lifecycle than the underlying infrastructure below it or the applications above it
  • isolated from other applications, particularly via memory and CPU resource limits

The effort required to deploy a Packaged Service via popular Configuration Management tools is typically low, thanks to good support for Docker and Docker Compose.

Logstash is a log-shipping service often packaged and deployed in this way.  Let’s look at an example managed with docker-compose:

version: '2'

services:
  # configure logstash via the logstash.conf file in this directory
  logstash:
    image: logstash:2.3.4-1
    command: --allow-env -f /logstash/config/logstash.conf
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
    environment:
      ES_HOSTS: 'elasticsearch:9200'
    # limit amount of memory logstash may use - it can be quite greedy
    mem_limit: 2048m
    # restart logstash if it crashes
    restart: unless-stopped

The logstash service:

  • runs the official logstash image, version 2.3.4-1
  • consumes the config at /logstash/config/logstash.conf and allows environment variable substitution
  • is limited to using 2048m of memory for the entire container
  • is restarted in the event of crashes
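For context, here is a minimal sketch of what the referenced logstash.conf might contain for this setup.  It assumes the TCP/UDP inputs on port 5000 published by the compose file and uses the ${ES_HOSTS} variable that --allow-env makes available; a real pipeline would add filters and codecs:

```
# listen for log events on the ports published by docker-compose
input {
  tcp { port => 5000 }
  udp { port => 5000 }
}

# ship events to Elasticsearch; ${ES_HOSTS} is substituted from the
# environment because logstash runs with --allow-env
output {
  elasticsearch {
    hosts => ["${ES_HOSTS}"]
  }
}
```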

Please see the logstash example for more details and demo steps.  The suggested takeaway is that the logging function can be decoupled from the rest of the organization’s infrastructure and applications, providing engineering efficiency benefits to the logging team while permitting safer experimentation with containerization.

Summary

These four patterns are low-effort, low-risk ways to improve your own and your team’s efficiency while introducing containerization to your environment.  Please try these patterns out and let us know how they work for you via Twitter (@qualimente).

If you’d like to learn more about how containerization works, please join us for an expert-led Fundamentals of Docker for Engineers workshop!

