DevOps at ASL19

As the first post on our engineering blog, it’s fitting to provide an overview of how DevOps has evolved at ASL19. It also sets the stage for the upcoming series of posts about our experience with Kubernetes, and is a good chance to see how and why we ended up where we are.

1. In the beginning, there was ssh

In my early days at ASL19, deployment was manual. We had servers on a few providers such as Linode that we would ssh into, set up whatever was needed to run the project, and then either clone/upload the code and build it on the server or upload pre-built artifacts. We had several projects with different stacks, so each required the programmer’s expertise and involvement in setting up the server and deploying the project. Now, to keep this post shorter than a book, I’m not going to discuss each part of this journey in depth. I hope most will agree that this is not the right way to run projects, beyond maybe building a quick MVP by yourself. At some point, you need an automated method that anyone can run. This could be as simple as a shell script, or, at the next level, configuration management software such as Chef, Puppet, or Ansible.
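To make the "simple shell script" step concrete, a first pass at automating those manual steps might look something like this. The host, paths, artifact name, and service name below are hypothetical stand-ins, not our actual setup, and the commands are overridable so the script can be dry-run:

```shell
#!/usr/bin/env bash
# A minimal "push and restart" deployment sketch. Server, paths, and
# service name are hypothetical placeholders.
set -euo pipefail

deploy() {
  local server="${DEPLOY_SERVER:-deploy@app.example.com}"  # hypothetical host
  local app_dir="${APP_DIR:-/srv/myapp}"                   # hypothetical path
  local ssh="${SSH_CMD:-ssh}"                              # overridable for dry runs
  local scp="${SCP_CMD:-scp}"

  # Ensure the target directory exists, upload the artifact,
  # unpack it, and restart the service.
  "$ssh" "$server" "mkdir -p $app_dir"
  "$scp" myapp.tar.gz "$server:$app_dir/"
  "$ssh" "$server" "cd $app_dir && tar xzf myapp.tar.gz && systemctl restart myapp"
}
```

Even this small script is an improvement over typing the commands by hand: anyone on the team can run it, and it always performs the same steps in the same order.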

2. Ansible Reform

I had some experience setting up a complex system with Ansible, and I found it intuitive and easy to pick up. We started using Ansible to automate our deployments, and pretty soon most of our projects were deployed with it; to this day, several projects still use Ansible for deployment.

As nice as Ansible is, it still has several disadvantages. To name a few:

  • Writing playbooks is tedious and time-consuming, especially if you want your commands to be idempotent. You can use existing playbooks for some components (say, installing a database or a tool), but in reality they sometimes don’t work out of the box and require tweaking, or don’t play nicely with other playbooks.
  • Even if you do make your playbooks idempotent, rollbacks and updates can become complicated. The close coupling of the host OS, dependencies, and the final executable makes the environment messy and brittle.
  • It doesn’t fully replicate environments, so it doesn’t eliminate the old “but it worked on my machine” problem. You can run your development instance in a VM with something like Vagrant, but that takes time and system resources.
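The first point, idempotency, is worth illustrating: every step has to be written so that running it a second time is a harmless no-op. Even in plain shell this takes deliberate effort. These helpers are generic illustrations, not taken from our playbooks:

```shell
# Idempotency takes deliberate effort: every command must be safe to run twice.

# mkdir -p succeeds whether or not the directory already exists.
ensure_dir() { mkdir -p "$1"; }

# Append a line to a config file only if it is not already present,
# so repeated runs don't accumulate duplicates.
ensure_line() {
  local line="$1" file="$2"
  grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
```

Multiply this kind of care across every package, config file, user, and service a project needs, and playbooks get long quickly.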

3. Docker Enlightenment

This was around the time that Docker started to make waves in the DevOps world. I had some experience with Docker, but I thought it was too early to introduce it to our stack, so I kept using it only as a way to run dependencies quickly and easily, for example bringing up a database for local development. The more we used Docker, though, the more apparent its benefits became, and it was clear how much easier it is to work with than configuration management tools. A full discussion of Docker’s benefits would require a separate post, but to name a few: it is much faster to operate, it isolates components, and it makes deployments much more independent.
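The "database for local development" use case is a one-liner, which is exactly why it was such an easy entry point. The image tag, password, and port here are illustrative, and the `docker` binary is overridable so the command can be inspected without a running daemon:

```shell
# Bring up a throwaway Postgres container for local development.
# Image tag, password, and port are illustrative choices.
start_dev_db() {
  local docker="${DOCKER_CMD:-docker}"
  "$docker" run -d --name dev-db \
    -e POSTGRES_PASSWORD=devpass \
    -p 5432:5432 \
    postgres:15
}
```

Compare that with writing and running a playbook to install and configure the same database on your machine, and the appeal is obvious.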

4. CI/CD Romanticism

Docker’s rise also coincided with CI/CD pipelines becoming more commonplace, with more (and easier) solutions available than Jenkins (let’s be honest, we should thank Jenkins for its service and just let it die). At the time we were using Bitbucket to host our repositories, and after the introduction of Bitbucket Pipelines, we migrated Khoondi, one of our more complicated projects, to use Pipelines to run tests and build Docker images hosted on Docker Hub. Eventually, Khoondi became the first project we deployed to production using Docker. Interestingly, the deployment itself was still done with Ansible: installing Docker, populating config files from Jinja templates, hardening the server, and setting up other components such as Nginx and firewall rules.

Khoondi was a good use case because it had multiple parts: the application itself consisted of four independent components, two running as cron jobs and two as web applications. It also depended on two other Docker images, one for proxying and one for favicon extraction. Docker definitely made the deployment easier and more stable, and we have had no deployment issues since launch. This positive experience made us more confident about using Docker when later opportunities came up.

The switch to GitLab at the beginning of 2018 helped speed up the process. GitLab provides most of the tools needed for a modern pipeline out of the box: GitLab CI, runners, and a registry to host Docker images. We started using Docker-based deployment for new projects and slowly moved our existing active projects to this system as well. Aside from a few exceptions, all of our active projects now run on Docker. This has made our deployments more uniform and consistent, but there are still parts we can certainly improve, and that’s where Kubernetes (K8s) comes in.
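Stripped of YAML, what a typical CI job in this setup does per commit is a build-tag-push cycle like the one below. The registry URL and image name are placeholders; `CI_COMMIT_SHORT_SHA` is one of the predefined variables GitLab CI injects into jobs:

```shell
# Roughly what a CI job runs for each commit: build the image, tag it
# with the commit SHA, and push it to the registry. The registry URL
# and image name are placeholders, not our real ones.
build_and_push() {
  local docker="${DOCKER_CMD:-docker}"
  local image="registry.example.com/asl19/myapp"  # placeholder registry/image
  local tag="${CI_COMMIT_SHORT_SHA:-dev}"         # provided by GitLab CI
  "$docker" build -t "$image:$tag" .
  "$docker" push "$image:$tag"
}
```

Tagging by commit SHA means every pipeline run produces a uniquely addressable image, which is what makes deployments (and rollbacks) reproducible.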

To be continued.