Containerized apps

Doing things that previously took 0.5 computers using 0.4 computers

These are rapidly evolving standards. Check the timestamps on any advice.

A lighter, hipper alternative to virtual machines which, AFAICT, attempts to make provisioning services more like installing an app than building a machine: the (recommended) unit of packaging is an app or service rather than a machine. This emphasis somehow leads to even more webinars. Related to sandboxing, in that containerized apps are sandboxed from one another, even when they would otherwise conflict. Containerization targets are typically light-weight, reproducible execution environments for some program, which might be deployed in the cloud.

I care about this because I hope to use it for practical cloud computing and reproducible research as part of my machine learning best practice; the document is biased accordingly.

Most containerization systems in practice comprise several moving parts bolted together. They provide systems for building reproducible sets of software dependencies to execute some desired process in a controlled and not-crazy environment, separate from your other apps. There is a built-in system for making recipes so that other people/machines can duplicate what you are doing exactly, or at least close enough.

The most common hosts for containers are, or were, Linux-ish, but I believe there are also Windows/macOS solutions these days, which run VMs to emulate a container-capable host OS. Or something?

Learn more from Julia Evans’s How Containers work:

When you first start using containers, they seem really weird. Is it a process? A virtual machine? What’s a container image? Why isn’t the networking working? I’m on a Mac but it’s somehow running Linux sort of? And now it’s written a million files to my home directory as root? What’s HAPPENING?

And you’re not wrong. Containers seem weird because they ARE really weird. They’re not just one thing, they’re what you get when you glue together 6 different features that were mostly designed to work together but have a bunch of confusing edge cases.


As a tool for reproducible research

Docker may not be the ultimate tool for reproducible research but it is a start.

Notably there are two major standards of interest in this domain, Docker and Singularity. Docker is very popular on the wider internet, but Singularity is more popular for HPC. Docker is cross-platform and easy to run on your personal machine, while Singularity is Linux-only (i.e. if you want to use Singularity from not-Linux, install a Linux VM). Singularity can run many Docker images (all?), so for some purposes you may as well just assume “Docker” and bear in mind that you might use Singularity as an implementation detail. 🤞

A challenge here is that most containerization tutorials are about how to deploy a webapp for my unicorn dotcom startup, which is how they get featured on Hacker News. That is useless to me. I want to Do Science™️. For some reason most of the tutorials bury the lede.

Here are some more pragmatically useful ones. The thing which confused the crap out of me starting out is “what is the ‘inner loop’ of containerized workflow for science?”. I don't care about all this stuff about deploying vast swarms of Postgresql servers or whatever, or finding someone else’s pre-rolled docker image; I need to develop some research. How do I get my code into one of these containers? (and why do so few tutorials start from there?) How do I get my data in there? How does the blasted thing get onto the cluster?
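One answer to the inner-loop question, at least during development: don’t bake your code into an image at all, but bind-mount your working directory into a stock image. A sketch (the image choice and script name are illustrative, and this assumes Docker is installed and running):

```shell
# Mount the current project into the container and run a script inside it.
# Edits made on the host are immediately visible inside the container,
# so the edit-run cycle stays fast.
docker run --rm -it \
  -v "$PWD":/work -w /work \
  python:3.11 python myscript.py
```

Data too big to live in the project directory can get its own mount, e.g. `-v /path/to/data:/data`. Getting the image itself onto a cluster is what registries (or Singularity, below) are for.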

(TODO: rank in some pedagogic order.)

  • Tiffany Timbers gives a brisk run-through for academics.
  • Jon Zelner goes in-depth with R in a series culminating in continuous integration for science.
  • Keunwoo Choi’s guide for researchers, by example.
  • The Alan Turing Institute containerization guide is an excellent introduction to how to use containers for reproducible science.
  • Timothy Ko shows off a minimal python app development workflow cycle.
  • Pawsey containerisation class (possibly based off Software Carpentry?).
  • Alan Turing Institute Docker intro.


Docker

The most common way of doing this; so common that it stands in for all the container technologies. It is structurally simple, but riven with confusing analogies, inconsistent terminology, and poor explanations that make it seem esoteric.

Fortunately we have Julia Evans, who explains at least the filesystem, overlayfs, by example. The Google best-practices page also has good illustrations which make clear what is going on. See also the Docker cheat sheet, as noted by digithead, who also explains the annoying terminology:

Docker terminology has spawned some confusion. For instance: images vs. containers and registry vs. repository. Luckily, there's help, for example this stack-overflow post by a brilliant, but under-appreciated, hacker on the difference between images and containers.

  • Registry - a service that stores image repositories
  • Repository - a set of Docker images, usually versions of the same application
  • Image - an immutable snapshot of a running container. An image consists of layers of file system changes stacked up on top of a base image.
  • Container - a runtime instance of an image

Essentially, with Docker you provide a recipe for building a reproducible execution environment, and the infrastructure ensures that environment exists for your program. The recipe is ideally encapsulated in the Dockerfile. The cost is that the initial setup is somewhat clunkier. The benefit is that setting things up the second time, and all subsequent times, is in principle effortless and portable.
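For concreteness, here is a minimal sketch of such a recipe for a python research script (the file names `requirements.txt` and `train.py` are hypothetical stand-ins for your own project):

```dockerfile
# Pin the base image for reproducibility
FROM python:3.11-slim
WORKDIR /app
# Install pinned dependencies first, so this (slow) layer is cached
# across rebuilds when only the code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the actual research code
COPY train.py .
CMD ["python", "train.py"]
```

Build with `docker build -t myexperiment .` and run with `docker run --rm myexperiment`; the second and subsequent builds mostly hit the layer cache.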


NB there is a GUI for all this called DockStation, which might make some things more transparent.


Installing Docker is easy. Do not forget to give yourself permission to actually run docker:

sudo groupadd docker
sudo usermod -aG docker $USER
# log out and back in (or run `newgrp docker`) for the group change to take effect



Docker with GPU

Annoying; last time I tried, it required manual patching so intrusive that it was easier not to use Docker at all. Maybe better now? I’m not doing this at the moment, and the terrain is shifting. The currently least-awful hack could be simple. Or not.
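For what it’s worth, the current least-awful path seems to be NVIDIA’s Container Toolkit, which exposes GPUs to stock Docker via a `--gpus` flag. A smoke-test sketch, assuming the toolkit and NVIDIA drivers are already installed (the CUDA image tag is just an example):

```shell
# Check that a container can see the GPU at all
# (requires nvidia-container-toolkit on the host)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` prints your GPU from inside the container, the rest of the stack has a chance of working.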

This might be an advantage of singularity.

Opaque timeout error

Do you get the following error?

Error response from daemon: Get
net/http: request canceled while waiting for connection
(Client.Timeout exceeded while awaiting headers)

According to thaJeztah, the solution is to use Google DNS for Docker (or presumably some other non-awful DNS). You can set this by providing a JSON configuration in the preference panel (under Daemon → Advanced), e.g.

{ "dns": ["", ""] }


Rocker has recipes for running R in Docker.

## command-line R
docker run --rm -ti rocker/r-base
## Rstudio
docker run -e PASSWORD=yourpassword --rm -p 8787:8787 rocker/rstudio
# now browse to localhost:8787
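To work on your own project rather than a bare R session, mount it into the container; a sketch (the paths and script name are illustrative):

```shell
# Run a project script inside the rocker container, with the
# host's current directory visible as the working directory
docker run --rm \
  -v "$PWD":/project -w /project \
  rocker/r-base Rscript analysis.R
```

The same `-v`/`-w` trick works with the RStudio image, which then sees your project in the browser session.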


Singularity

Singularity promises potentially useful container infrastructure.

A USP is that Singularity containers do not need root privileges, which means they are easy to put on clusters full of incompetent dingleberries, which is possibly every cluster that is silly enough to let me on.
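In practice this means you can, without root, pull most Docker images from a registry and run them as an unprivileged user. A sketch, assuming Singularity is installed on the cluster:

```shell
# Convert a Docker Hub image to a local .sif file; no root needed
singularity pull docker://rocker/r-base
# Run a command inside it as yourself ($HOME is mounted by default)
singularity exec r-base_latest.sif R --version
```

The resulting `.sif` is a single ordinary file, so it can be copied to the cluster with `scp`/`rsync` like any other artefact.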

Singularity provides a single universal on-ramp from the laptop, to HPC, to cloud.

Users of singularity can build applications on their desktops and run hundreds or thousands of instances—without change—on any public cloud.

Features include:

  • Support for data-intensive workloads—The elegance of Singularity’s architecture bridges the gap between HPC and AI, deep learning/machine learning, and predictive analytics.
  • A secure, single-file-based container format—Cryptographic signatures ensure trusted, reproducible, and validated software environments during runtime and at rest.
  • Extreme mobility—Use standard file and object copy tools to transport, share, or distribute a Singularity container. Any endpoint with Singularity installed can run the container.
  • Compatibility—Designed to support complex architectures and workflows, Singularity is easily adaptable to almost any environment.
  • Simplicity—If you can use Linux®, you can use Singularity.
  • Security—Singularity blocks privilege escalation inside containers by using an immutable single-file container format that can be cryptographically signed and verified.
  • User groups—Join the knowledgeable communities via GitHub, Google Groups, or in the Slack community channel.
  • Enterprise-grade features—Leverage SingularityPRO’s Container Library, Remote Builder, and expanded ecosystem of resources. […]

Released in 2016, Singularity is an open source-based container platform designed for scientific and high-performance computing (HPC) environments. Used by more than 25,000 top academic, government, and enterprise users, Singularity is installed on more than 3 million cores and trusted to run over a million jobs each day.

In addition to enabling greater control over the IT environment, Singularity also supports Bring Your Own Environment (BYOE)—where entire Singularity environments can be transported between computational resources (e.g., users’ PCs) with reproducibility.


LXC

LXC is another containerization standard. Because Docker is the de facto default, let’s look at it in terms of Docker: roughly, LXC containers behave like lightweight full OSes (an init system, multiple processes, something you log into and mutate), whereas Docker containers typically package a single app process built from a recipe.
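A sketch of the LXC workflow via the LXD front-end, assuming LXD is installed and initialised (the container name is arbitrary):

```shell
# Launch a full Ubuntu system container and get a shell in it
lxc launch ubuntu:22.04 scratchbox
lxc exec scratchbox -- bash
```

Note the contrast with Docker: there is no image-build recipe here; you get a persistent mini-machine and configure it interactively.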


Kubernetes

Kubernetes is a large-scale container automation system. I don’t need Kubernetes, since I am not in a team with 500 engineers.