Why Kubernetes?

Kubernetes is currently a hot topic, with plenty of arguments for and against using it at smaller scales. Opponents argue that Kubernetes (K8s for short) brings complexity that most use cases don't need and that traditional methods remain the better choice for most people. Some proponents have unfortunately fallen into the shiny-new-technology trap and are pushing the use case to extremes that are inadvisable and counterproductive. After working in this domain for many years and seeing technologies come and go, I believe that in most cases there is no universally right or wrong technology; it depends on the use case. In this series of posts, I'll share some of our use cases and examine whether K8s is a good or bad fit.

I’m going to assume that the reader is familiar with what Kubernetes (K8s) is and does, and move on to why it’s a good fit for our organization and how it can help with some aspects of our system. This post is also not a comparison of K8s with other container orchestration solutions such as Nomad or Mesos.

1. We currently have more than 20 projects, each of which can have multiple components, and ideally each has at least a staging and a production environment, which means we need to manage a large number of servers. At the start of a project, this includes setting up the server on AWS, with VPCs, firewalls, hardening, installing required packages (at least Docker), and manually setting up deployment requirements such as docker-compose files and environment variables on the servers. Then come post-install tasks such as configuring Nginx as a reverse proxy and setting up SSL certificates. Some of this can be improved with scripts or tools like Terraform, CloudFormation, or Ansible, but the work of managing that number of servers remains the same. The same goes for maintenance: all those servers need frequent updates and occasionally require rebooting, which in some cases makes the application unavailable during that time.
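To make the per-server toil concrete, here is a minimal sketch of the kind of docker-compose file that has to be created and kept in sync on every server by hand; the image name, port, and env file here are hypothetical placeholders, not our actual setup:

```yaml
# docker-compose.yml, copied to and maintained on each server manually
version: "3.8"
services:
  web:
    image: myorg/myapp:1.0       # hypothetical application image
    restart: always
    ports:
      - "127.0.0.1:8000:8000"    # exposed only to the local Nginx reverse proxy
    env_file: .env               # environment variables managed per server, by hand
```

Every spec change, secret rotation, or version bump means touching a file like this on each server individually.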

K8s lets us treat resources as a pool and removes the individual-server mentality. We allocate a number of servers to our cluster and, to a large degree, eliminate the per-server work that was previously needed (there may be some complications making this work with legacy apps, for example, connecting to an existing RDS instance in a private VPC).

K8s also helps with other repetitive tasks, such as automatic DNS and certificate management, resource monitoring (through Prometheus), and log collection (ELK). Since we are no longer managing individual servers, the per-server maintenance tasks are eliminated as well.
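As an illustration of what automatic DNS and certificate management can look like in practice, here is a minimal sketch of a K8s Ingress, assuming cert-manager and external-dns are installed in the cluster; the hostname, issuer name, and service name are hypothetical placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # cert-manager watches this annotation and requests/renews the TLS cert
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # external-dns creates the matching DNS record in the cloud provider
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: myapp-tls   # cert-manager stores the issued certificate here
```

With a setup like this, the Nginx-and-certbot ritual from the previous section becomes a few declarative lines per application.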

We started experimenting with K8s for our staging applications a while back and for a great overview of what we did, you can check out this post by Lexi, our DevOps intern who did an absolutely amazing job on this project in the past few months.

2. This manual server management also makes our infrastructure more resistant to change. For example, if we’re not happy with the specs and need more (or less) memory, the amount of work involved is not trivial. In practice, we usually end up either overshooting the specs just in case or living with the restrictions.

K8s largely eliminates this issue and lets us experiment with resources easily. We can start with a small spec and increase it as needed by changing a couple of lines in the deployment config and applying the change. As long as the requested resources don’t exceed the specs of our largest server, K8s takes care of the rest.
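Concretely, the lines in question are the resources section of a Deployment. This is a sketch with hypothetical names and numbers, not our actual config:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:1.0   # hypothetical image
          resources:
            requests:              # the scheduler uses these to place the pod
              cpu: 250m
              memory: 256Mi
            limits:                # the container is throttled/killed beyond these
              cpu: 500m
              memory: 512Mi
```

Doubling the memory is a matter of editing two values and running kubectl apply; K8s reschedules the pod onto a node with enough room.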

3. Looking at resources as a universal pool also provides a lot of benefits, especially on the finance side. Currently, it’s hard to predict our server requirements and their variations, so long-term planning with reserved instances is harder, and the flexible reserved instances are considerably more expensive. With K8s, we can safely reserve the total spec of our current infrastructure for the long term and save quite a bit of money. This can be taken a step further by using spot instances where appropriate.

4. While this has not been a concern for us so far, K8s brings the possibility of scaling on demand. We are just starting to use this feature for our Outline project, allowing it to scale up and down based on demand. This is practically impossible without containers and K8s. With K8s and a properly set-up application, we can simply change and apply our configs and wait for K8s to bring up additional instances and take care of load balancing and service discovery. Or, going one step further, we can define rules that let K8s do this seamlessly based on traffic or the number of users.
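The rule-based version of this is a HorizontalPodAutoscaler. A minimal sketch, assuming the cluster has metrics-server installed and the target Deployment is the hypothetical myapp from above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add/remove pods to keep average CPU near 70%
```

K8s then adds or removes replicas within the 2–10 range as load changes, with no operator involvement.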

Arguments against using K8s

Let’s discuss some of the more popular arguments people raise against K8s.

  • The old ways are working fine.

This is one of the regular arguments (and it applies to most new technology). Sure, the old ways still work. If you have one server or project, you’re not concerned with fast scaling, and you have enough sysadmin knowledge to make it all work, then go for it. I would actually say you should do that and not spend your time on K8s that early. But if you’re running a lot of projects and servers and/or have a big team, that model has a lot of disadvantages, some of which we covered in our last post about our progress in DevOps. Managing a lot of servers, dependencies, maintenance, updates/rollbacks, etc. takes a lot of time.

  • K8s introduces complexity and requires expertise.

Every new technology requires learning and, as a result, brings some complexity. If you will be running your own K8s cluster, this is a very legitimate concern and you should not rush into it. K8s is a complex beast and not easy to maintain by yourself. But if you use a hosted service such as GKE or EKS, the learning curve is not that steep and is comparable to other technologies. In practice, you probably won’t have to deal with cluster internals much anyway; as an analogy from another managed service, we have RDS instances that have been running for four years with no issues whatsoever. Obviously, a hosted service costs more money, but without the team and expertise to run a K8s cluster yourself, it is the best option for getting up and running, and the money you save in other ways will cover that cost.

Another thing to keep in mind is that much of the operational cost of K8s is fixed and paid once. The more servers or projects you have, the lower the marginal cost, and you eventually end up with a simpler and more consistent DevOps system.

  • There are other simpler options compared to K8s.

This argument is really about considering alternative solutions. There are alternatives to K8s, such as Mesos and HashiCorp’s Nomad. While these options certainly exist (aside from briefly tinkering with Mesos, I have not used any of them), I think there probably isn’t much to gain from using them and a lot to gain from using K8s. K8s is clearly the leader in the field, with lots of contributors and a very active, thriving community, with new projects popping up all over the place. If you don’t want to run your own K8s cluster, there are numerous hosted services on the market, and if you do, it’s easier to find candidates who are familiar with K8s and there are abundant resources to turn to when needed.

In conclusion, I think the majority of organizations can benefit from using Kubernetes; the cost and complexity are worth the investment and will pay dividends later on. The truth is that containers are not a temporary technology. A lot of projects already utilize or depend on containers, and using them is pretty much inevitable: most CI tools depend on containers, many tools allow (or only support) running as containers, and they bring advantages to development that deserve a separate post. At some point, adopting a new technology is no longer optional, and holding out will put you at a disadvantage. If you are already using containers as the basis of your DevOps, Kubernetes is the next logical step, and by not using it you’d be missing out on its advantages and even making your life harder.

Happy DevOpsing!