A lot of companies nowadays use Kubernetes to orchestrate their containers. I thought it would be interesting to talk about the benefits of containerisation itself - i.e. before you even think about adding Kubernetes on top of that.
“Kubernetes” - what a buzzword, right? I bet that at first you had no idea what it was, so you had to Google it. Once upon a time, I Googled it too. :) The definition states: “Kubernetes is a tool for orchestrating your containers.” But what are containers? - you ask.
To put it in very simple words, a container is a logical “package” that contains everything an application needs in order to function (e.g. the app itself, its dependencies, libraries and configuration files).
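To make this a bit more concrete, here is a minimal sketch of a Dockerfile - the recipe from which a container image is built. The file names and the Python app below are placeholders of my own, not taken from any particular project:

```dockerfile
# Hypothetical Dockerfile for a small Python web app.

# Base image providing the language runtime
FROM python:3.12-slim

# Working directory inside the container
WORKDIR /app

# Install the app's dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and configuration files
COPY . .

# Command the container runs on start
CMD ["python", "app.py"]
```

Everything the app needs - runtime, libraries, code and configuration - ends up inside the resulting image, which is exactly the “package” described above.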
In fact, containers are the natural result of an evolution cycle: from bare-metal servers, through virtual machines, to containers.
Containers are therefore completely independent of the environment in which they run. In practice, this means you can take a container with your application and run it seamlessly in any environment - AWS, GCP, Azure, your own data centre, or your laptop. This brings a lot of benefits, as developers can focus on application development and (ideally) don't have to worry about where and how their applications run.
As mentioned above, the power of containers rests in the fact that they can be run anywhere. So-called hybrid infrastructure setups are very popular today; in practice, this means your applications run partly in your private data centre and partly in the public cloud. Currently, there is no better fit for hybrid infrastructure design than to containerise your applications. Google recently launched Anthos - a new hybrid and multi-cloud solution that lets you build and manage modern hybrid applications across environments. Now you can have a single interface to run containers anywhere.
Your Ops (DevOps) teams will also enjoy containers. They no longer have to worry about finding the right place to run each application. In fact, a developer can easily run a container even on their own laptop, as a sandbox.
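For instance, assuming the hypothetical image from the Dockerfile sketch above, building and running it locally comes down to two commands (the image name and port are made up):

```bash
# Build the image from the Dockerfile in the current directory
docker build -t my-app:dev .

# Run it locally, exposing the app on port 8080 of the laptop
docker run -p 8080:8080 my-app:dev
```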
Containers are small compared to conventional virtual machines (mostly - obviously, if I pack a huge monolith into a single container, that container will also be large, but this isn't the right approach) and require far fewer resources such as memory or CPU. As a result, you can use your physical resources much more efficiently - you can stack containers on one server in a smart way, making the most of the available resources and leaving as little bare metal as possible lying idle.
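As a rough illustration of that stacking, Docker lets you cap each container's memory and CPU so that many small containers can share a single server (the image names and limits below are hypothetical):

```bash
# Give each container a hard cap on memory and CPU,
# so several of them can be packed onto one machine.
docker run -d --memory=256m --cpus=0.5 my-app:dev
docker run -d --memory=256m --cpus=0.5 my-other-app:dev
```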
When I mention agility and speed, I mean two things: the agility and speed of the containers themselves, and the agility and speed of the people working with the containers.
All of this helps you achieve faster and more efficient development, shorter time-to-market, much easier application operation, and a more stable infrastructure that can respond immediately to frequent changes - whether that means frequent releases of new application versions or quick reaction to changes in traffic.
Containers have been adopted by the biggest global players such as Google, Netflix or Twitter - and that really says it all.
Finally, I would like to note that the strength of containers grows with their number - the more of your workloads you containerise, the more of these benefits you gain.
As a result, you probably won't stop at just one container, and you will need a solution to manage and orchestrate them all. And that's how we get back to Kubernetes, which is exactly such a solution. Learn more about Kubernetes in my previous article.