Achieving Docker Tag Happiness, a 6-Year Journey
For years, we at TEN7 have maintained Flight Deck, a set of Docker containers for both local development and production hosting. We’ve maintained it for so long, in fact, that we have PHP 5.6 containers capable of hosting Drupal 6 on Kubernetes.
Over the years, we’ve used a variety of git strategies for the container source code, as well as different tagging approaches on Docker Hub to provide ready-to-use containers. Needless to say, we’ve learned a lot about what works and what doesn’t for developing containers.
One Repo to Rule Them All
The first thing we tried was a mono-repository approach. Unlike on a VM or physical server, it is considered bad practice to run multiple “applications” in the same container. If you’re using a conventional LAMP stack, you usually end up with two containers:
- Apache and mod_php
- MySQL
This works because PHP acts more as a library for Apache than as an independent application. If using PHP-FPM, however, you usually end up with three:
- Apache or NGINX
- PHP-FPM
- MySQL
Two or three containers doesn’t seem like a lot for one repo. Simply make a top-level directory for each:
```
/path/to/flight-deck
├── apache_php
│   └── Dockerfile
└── mysql
    └── Dockerfile
```
This sounds great from a git perspective, as any change to the entire fleet of containers all goes into the same git history and shares the same tags. Things get sticky though when you set up automated deploys on Docker Hub.
Hub doesn’t care about git repositories; it only cares about the resulting containers. So, when you set up a new container on Hub, you need to choose a name for each, then point it at a repo with the appropriate build configuration.
For our Apache/PHP container, we would at minimum need the following Docker Hub build rule:
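The screenshot of that rule didn’t survive here, but a sketch of it, with field names approximating Docker Hub’s automated-build screen and the branch name assumed to be master, would look something like:

```
Source Type:          Branch
Source:               master
Docker Tag:           latest
Dockerfile location:  /apache_php/Dockerfile
Build Context:        /
```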
And then we would need a separate Docker Hub repository, pointed at the same git repository, for the database container:
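Again as a sketch (the Hub-side naming is illustrative), the database rule would differ only in its Dockerfile location:

```
Source Type:          Branch
Source:               master
Docker Tag:           latest
Dockerfile location:  /mysql/Dockerfile
Build Context:        /
```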
So, done! Right?
Well, not so fast. While your database container is pretty stable, you may find you need to update your Apache/PHP container much more often. Since both containers now live in the same repo, each time you commit a change to fix or update one container, you force a rebuild of the other. Docker Hub has no way to tell which container in the repo was changed, only that a change happened, so it’s time to rebuild!
In theory, this is fine, right? The builds should always result in the same thing, right? Not so. A container is essentially a file archive with an entire operating system inside it. Often, rebuilding a container with no changes to your Dockerfile will produce changes anyway, as the software packages pulled from the OS provider may have changed. This can create head-scratching problems for devs where “things worked yesterday,” or worse, a production incident.
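One way to reduce that drift, sketched here with an assumed Alpine base image and illustrative package versions, is to pin versions in the Dockerfile so a no-change rebuild pulls the same packages:

```dockerfile
FROM alpine:3.12

# Unpinned: each rebuild may pull whatever build of php7 the
# Alpine mirrors currently serve.
# RUN apk add --no-cache php7

# Pinned: the rebuild is reproducible until this exact package
# version is dropped from the mirrors.
RUN apk add --no-cache php7=7.3.22-r0
```

Pinning trades surprise changes for the chore of bumping versions deliberately.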
It’s Not You, It’s Me
The obvious solution is to break up the mono-repo into several repos, one per container:
```
/path/to/flight-deck-web
└── apache_php
    └── Dockerfile

/path/to/flight-deck-db
└── mysql
    └── Dockerfile
```
This is a little annoying as a container developer. Instead of one big repo, you now have two (or more) repos each with their own separate history. While this sounds like a huge concern, in the end it wasn’t a problem at all for us. In fact, separating the repos created several advantages:
- Hub now only rebuilds a container when that container changes, not when another in the same fleet does.
- History is confined to each individual container.
- You can put container-specific instructions in the README where they’re easier to find.
And this is the approach we used...until Drupal 8 and PHP 7.
Variants, Subdirectories, and Tagging
When we split up the repos, we did so along application lines; that is, Apache/PHP and MySQL. If you’re only ever going to support the most recent version of any of these applications, splitting the repos is sufficient. The complication comes when you need to support multiple versions with different features:
- PHP 5.6 for Drupal 6 sites and Drupal 7 sites in bad need of an update.
- PHP 7.x for Drupal 7 with Drush 8.x
- PHP 7.x for Drupal 8/9 with Drush 10.x
Fortunately, Docker Hub does provide a way to offer variants within the same application: tags. Docker tags come after the container name. When you use docker pull to download a container by name alone, say ten7/flight-deck-web, you implicitly pull the latest tag, giving the full container name ten7/flight-deck-web:latest.
We can define custom tags in Docker Hub in the build configuration screen. While there’s no hard convention, it’s typical to use a version number or keyword:
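The tag table from the original post is missing; a typical set for this container (tag names illustrative, matching the variants discussed below) might be:

```
ten7/flight-deck-web:latest       # what a bare "docker pull ten7/flight-deck-web" fetches
ten7/flight-deck-web:5.6          # PHP 5.6 variant
ten7/flight-deck-web:5.6-drupal7  # PHP 5.6 with Drush 8.x for Drupal 7
ten7/flight-deck-web:7.4          # PHP 7.4 variant
ten7/flight-deck-web:7.4-drupal7  # PHP 7.4 with Drush 8.x for Drupal 7
```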
To keep things organized in the repo, we started separating each of these tagged variants in subdirectories:
```
/path/to/flight-deck-web
├── 5.6
│   └── Dockerfile
├── 5.6-drupal7
│   └── Dockerfile
├── 7.4
│   └── Dockerfile
└── 7.4-drupal7
    └── Dockerfile
```
This looks like it’d work just fine, but in reality it re-introduces the very problem that made us split the Apache/PHP and MySQL repos in the first place. Now, whenever we make a change to one variant, Hub rebuilds all of them. When things are broken and you’re waiting on Docker Hub to build several “unchanged” containers before it gets to yours, you’re going to have a very long wait indeed.
It’s Not You, It’s Me, Again
So what’s the solution? What we started doing a year ago was to break up each variant into its own named branch. We retained the subdirectory structure, as the Drupal 7 variants of Flight Deck are the same as the ones for Drupal 8/9, only with an older version of Drush. That gave us a 5.6.x branch and a 7.4.x branch:
```
/path/to/flight-deck-web (branch 5.6.x)
├── 5.6
│   └── Dockerfile
└── 5.6-drupal7
    └── Dockerfile

/path/to/flight-deck-web (branch 7.4.x)
├── 7.4
│   └── Dockerfile
└── 7.4-drupal7
    └── Dockerfile
```
Then, we configure the build rules on Docker Hub to account for the separate branches:
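The rule screenshot is missing, but the per-branch rules would pair each branch and subdirectory with its tag, roughly like this (field layout approximate):

```
Source Type  Source  Docker Tag   Dockerfile location
Branch       5.6.x   5.6          /5.6/Dockerfile
Branch       5.6.x   5.6-drupal7  /5.6-drupal7/Dockerfile
Branch       7.4.x   7.4          /7.4/Dockerfile
Branch       7.4.x   7.4-drupal7  /7.4-drupal7/Dockerfile
```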
More Variants, More Problems
This strategy does work, but it comes with a ton of subtle issues that make things even more complicated. Switching between vastly different, long-lived branches plays havoc with Git and many IDE tools. If you need to apply a change to another version, you often cannot merge it; instead, you have to cherry-pick or copy it over manually.
Another problem is with testing new containers. No matter how good your testing procedure or framework is, there’s always some subtle difference or incompatibility you didn’t account for. Testing a minor update to a container should be done both locally and on production-similar hardware. Since there’s no real primary branch in our repository any longer, we need to create parallel develop branches for each variant:
```
/path/to/flight-deck-web (branch 5.6.x)
├── 5.6
│   └── Dockerfile
└── 5.6-drupal7
    └── Dockerfile

/path/to/flight-deck-web (branch 5.6.x-develop)
├── 5.6
│   └── Dockerfile
└── 5.6-drupal7
    └── Dockerfile

/path/to/flight-deck-web (branch 7.4.x)
├── 7.4
│   └── Dockerfile
└── 7.4-drupal7
    └── Dockerfile

/path/to/flight-deck-web (branch 7.4.x-develop)
├── 7.4
│   └── Dockerfile
└── 7.4-drupal7
    └── Dockerfile
```
And then parallel build rules in Docker Hub:
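Sketching what that rule table balloons into (develop tag names are illustrative): every rule above now needs a develop twin.

```
Branch  5.6.x          ->  5.6, 5.6-drupal7
Branch  5.6.x-develop  ->  5.6-develop, 5.6-drupal7-develop
Branch  7.4.x          ->  7.4, 7.4-drupal7
Branch  7.4.x-develop  ->  7.4-develop, 7.4-drupal7-develop
```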
As you can guess, this gets really messy and hard to manage. And then there’s one last problem we need to solve…
“Can We Go Back to the Old Version?”
All of the above still has one huge, glaring problem. Even with development branches and thorough testing, some problems will still slip through and cause production issues. It could be due to a minor change in Node, a PHP library, or the behavior of the container startup script. The fact is, you can’t know with 100% certainty until you put it in prod and find out. And when you do find out, you now need to race to fix the issue, because there’s no way to go back to an older version of the container with the same variant tag; the build rules replace it by default.
I stalled on this problem for a very long time. We tried to combat it operationally by phasing in containers on production sites slowly, deploying the develop versions and correcting issues as we found them. It worked, but the pace was slow and hampered rolling out updates (like Composer 2) quickly.
Then I realized the solution was in front of me the entire time.
A support container we maintain for our Kubernetes-based hosting is ten7/flight-deck-util. This small container is a customized version of Alpine Linux, with key utilities and Ansible roles to facilitate deployments. It has no “outside” application to speak of, so it made sense to use a Git Flow-like branching strategy, with a main branch and a develop branch:
```
/path/to/flight-deck-util (branch main)
└── 1.x
    └── Dockerfile

/path/to/flight-deck-util (branch develop)
└── 1.x
    └── Dockerfile
```
History suggested we keep the subdirectories just in case we needed closely related variants on the same branch, but this hasn’t been necessary so far.
The build rules on Docker Hub were also really simple:
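Roughly like this (field layout approximate):

```
Source Type  Source   Docker Tag  Dockerfile location
Branch       main     latest      /1.x/Dockerfile
Branch       develop  develop     /1.x/Dockerfile
```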
Just one for main, one for develop. Easy! Yet, there was always the possibility that something would go wrong with an update to the container. The k8s integration with Ansible is finicky and has broken several times in the past. With ten7/flight-deck-util, that could result in a company-wide block on deployments, which would, understandably, be Very Bad.
After some research, I found out you can instruct Docker Hub to auto-create Docker tags based on git tags. One extra rule...
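That rule uses Docker Hub’s “Tag” source type with a regex; a sketch (the exact regex here is an assumption) looks like:

```
Source Type:          Tag
Source:               /^[0-9.]+$/    (matches version-style git tags like 1.2.7)
Docker Tag:           {sourceref}    (reuses the matched git tag as the Docker tag)
Dockerfile location:  /1.x/Dockerfile
```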
...and now we have something we never achieved with our Apache/PHP containers: rollback! If ten7/flight-deck-util:1.2.7 breaks, we can always go back to ten7/flight-deck-util:1.2.6 until we fix the problem.
“This is wonderful,” I thought, “but with all the weird branching and variants this would never work for our other containers…” Or can it?
Sometimes a Cigar is Not a Cigar
In several ways, creating PHP version-specific Docker tags was the key issue. It makes sense at first, but the more you try to retain multiple concurrent versions of the supposedly same container, the more the entire model breaks down. While this is what the official containers provided by Docker do, it doesn’t solve all of the use cases we had for Flight Deck. The ten7/flight-deck-web container in particular isn’t a singular application, but a composite environment with multiple tools and components baked in. Relying on a blunt “5.6” or “7.4” tag is like doing surgery with a cudgel.
The solution we landed on this week was to split the repos one more time, into ten7/flightdeck-web-5.6 and ten7/flightdeck-web-7.4, each with its own main and develop branches:
```
/path/to/flightdeck-web-5.6 (branch main)
├── 5.6
│   └── Dockerfile
└── 5.6-drupal7
    └── Dockerfile

/path/to/flightdeck-web-5.6 (branch develop)
├── 5.6
│   └── Dockerfile
└── 5.6-drupal7
    └── Dockerfile

/path/to/flightdeck-web-7.4 (branch main)
├── 7.4
│   └── Dockerfile
└── 7.4-drupal7
    └── Dockerfile

/path/to/flightdeck-web-7.4 (branch develop)
├── 7.4
│   └── Dockerfile
└── 7.4-drupal7
    └── Dockerfile
```
Notice now that we’re back to using a conventional Git flow branching strategy (main and develop) without any variant specific branches. Furthermore, the PHP version number is now in the repo name itself. When we set up the build rules in Docker Hub, we create a new container for each version number, and only need a limited set of build rules for each:
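As a sketch of one such container’s rules (ten7/flightdeck-web-7.4 shown; the 5.6 container mirrors it, and the regex and {sourceref} placeholder are assumptions based on Docker Hub’s tag-source rules):

```
Source Type  Source        Docker Tag           Dockerfile location
Branch       main          latest               /7.4/Dockerfile
Branch       main          drupal7              /7.4-drupal7/Dockerfile
Branch       develop       develop              /7.4/Dockerfile
Branch       develop       drupal7-develop      /7.4-drupal7/Dockerfile
Tag          /^[0-9.]+$/   {sourceref}          /7.4/Dockerfile
Tag          /^[0-9.]+$/   {sourceref}-drupal7  /7.4-drupal7/Dockerfile
```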
The above looks like a lot of rules, but they’re the same rules from one Apache/PHP container to another. There’s also a standardized, shorter set of Docker tags:
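Something like this (tag names per the branch scheme above):

```
ten7/flightdeck-web-7.4:latest
ten7/flightdeck-web-7.4:drupal7
ten7/flightdeck-web-7.4:develop
ten7/flightdeck-web-7.4:drupal7-develop
```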
And now, container-version-specific tags derived from the git tag:
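For example, a git tag of 5.0.2 would produce:

```
ten7/flightdeck-web-7.4:5.0.2
ten7/flightdeck-web-7.4:5.0.2-drupal7
```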
In the above, the “5.0.2” tag isn’t a PHP version, a Node.js version, or any external version; it’s the version number of the container itself. This allows us to roll back easily by using that container-specific tag. Better yet, it supports variants (like “drupal7”) with only one more build rule.
It’s been a very long journey for TEN7 and Docker. We’ve learned a lot about how to maintain a series of containers while supporting vastly different requirements and variants. Often, the solution wasn’t to add more complexity, but to rethink our approach with the aim of simplifying things further.