In the modern IT world, software delivery cycles are getting shorter and shorter, while the size and complexity of applications are increasing all the time. Not only that, but we are now operating in a digital multiverse – software isn’t just running on individual computer endpoints or on-premises servers anymore, but on a multitude of public clouds such as AWS, Google Cloud Platform and Azure, on IaaS private clouds and on any number of hybrid combinations of all of these.
For developers, this means there is pressure to code bigger and better programs for more and more environments at an ever-increasing speed. For IT operations teams, it means juggling configurations, rollouts, updates, maintenance, load balancing and more across varied and complex system and network architectures. No wonder agility and efficiency have become buzzwords across the industry.
DevOps is a software production methodology that seeks to unify the development and operational management of software. Instead of having programming teams throw an application over the wall to their colleagues in ops who then have to deal with whatever problems arise in deployment, DevOps seeks to make what goes on when you run an app ‘out in the wild’ part and parcel of the developmental thought process, and vice versa.
The aim of DevOps is therefore to reimagine software production, from concept through development to deployment, as a single streamlined end-to-end process, with everyone – developers, testers and the ops team – working together in harmony and automated processes driving speed and efficiency. Through collaboration and rationalised workflows, companies can deliver bigger and better apps for every environment they need on shorter and shorter timescales.
However attractive this vision of lean, agile, continuous production is, making it a reality depends on more than cultural change and how IT teams are organised to work together. There are technical challenges, too. One of these is how to marry application configuration with infrastructure configuration, especially when looking to deploy an app in multiple environments in the cloud.
This is the story of how two technologies, Containers and Kubernetes, have become essential tools for DevOps teams looking to streamline configuration from development through to deployment across multiple platforms, and the value this adds to businesses.
Containers: Accelerating Innovation
Software developers have always faced issues when it comes to making programs suitable for different platforms and infrastructure environments. Say you want to make an app that can run on Windows, Mac and Linux. In the past you might have had to write three different versions, one for each OS, because Windows, Mac and Linux all interpret and execute code in different ways.
Nowadays, of course, you have mobile iOS and Android to throw into the mix. And it isn’t just operating systems that developers have to worry about. Different IT infrastructures – whether an app runs on a bare metal server, on a private cloud or on one of the various public IaaS services – affect how applications perform in different ways, because of factors like load balancing and routing across different network architectures. This leads to what you might call infrastructure lock-in, with a single script having to be reconfigured for every single environment it runs on.
When your aim is to be able to roll out scripts at high speed across many different environments, this is highly inefficient and places a huge burden on operations teams. It’s exactly the sort of problem a DevOps approach seeks to solve, by addressing infrastructure lock-in at the development stage. But how can that be achieved in practice?
Containerization has been something of a game-changer for DevOps because it enables a “write once, run anywhere” approach to development. Containers are a type of virtualization. But whereas virtual machines, such as those created with VMware, work by abstracting complete servers, each with its own operating system, containers abstract higher up the stack, at the application layer.
Image courtesy of https://cloud.google.com/containers/
One of the consequences of this is that containers break the intrinsic link between application and infrastructure. The name ‘container’ is itself an analogy borrowed from shipping containers – boxes built to standard dimensions, which make it easy to move the goods inside from one mode of transportation to another.
In software terms, containerization similarly makes application deployment across different environments much more agile and efficient. Developers can take a script – whether for a full app, for a particular function of an app or just for a small patch – bundle it in with all the configuration files (including APIs), libraries and dependencies it needs, and end up with a lightweight, self-contained, self-sufficient asset that can be run anywhere. Packaged like this and deployed using a container platform like Docker, scripts can run successfully on multiple infrastructures, physical and virtual, solving the problem of portability across different environments.
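As a quick illustration, this is roughly what that bundling looks like in practice. The sketch below packages a hypothetical Python service with Docker – the file names and base image are assumptions for the sake of the example, not a prescription:

```dockerfile
# Pin the runtime, so the container behaves the same on a laptop,
# a bare metal server or any cloud VM
FROM python:3.12-slim

WORKDIR /app

# Bundle the dependency list and install the libraries inside the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself
COPY app.py .

# The command the container runs, wherever it is deployed
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the resulting image can then be started with `docker run myapp` on any host with a container runtime, with no per-environment reconfiguration.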
From a DevOps perspective, containers therefore help to solve some critical issues for both development and deployment. Developers don’t have to worry about re-scripting the same code for different production environments (like rearranging shipped goods for a different mode of transport), while production and operations are likely to face far fewer configuration issues, especially when confronting multiple cloud environments. In addition, the portability of containers makes it much easier to pass scripts back and forth between development, testing and production, aiding collaboration. For the production of cloud-native applications, the speed and agility this brings helps to accelerate innovation – time previously spent on reconfiguring and bug fixing for different environments can now be spent on upgrades and improvements.
Kubernetes: What it is and Why it Matters
Containerization, then, resolves some of the challenges facing DevOps teams when it comes to the ‘horizontal’ scaling of applications, or deployment across multiple platforms. By abstracting applications from infrastructure, and bundling scripts for functions together with the configuration needed to ‘run anywhere’, containers help to build a bridge between development teams focusing on app configuration and operations teams focusing on infrastructure configuration – a divide that often leads to conflicting priorities.
But what about vertical scalability? For modern digital enterprises, how fast and effectively you can launch a single app across multiple environments is not the issue. What really matters is the speed and efficiency with which you can roll out a continuous stream of many different applications, at scale, which work no matter what the platform.
It is here that DevOps teams run into one major drawback with containers. While it is relatively straightforward to configure a single container to work reliably in any given IT environment, configuring containers to work together is much more complex. So if you have a launch cycle which involves, say, running updates and patches for 100 different applications at any one time, you still have to duplicate the configuration work in at least 100 different containers. And if you push further into a microservices approach, where single applications are sub-divided into separate functions or ‘services’, each run from a separate container, the complexity is multiplied even further.
When you’re dealing with app production at high volumes, or with large, complex, multi-container applications, coding and recoding configurations for every single environment in every single container remains a cumbersome and time-consuming task. To carry on the shipping analogy, it’s like loading containers manually – yes, the containers are better than unloading and reloading items individually, but it is still laborious, long-winded work.
This is why, for DevOps teams looking to take agility to the next level, Kubernetes has become a go-to solution.
Kubernetes is a container orchestration tool developed by Google. According to the Kubernetes website, it functions as “a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.” Its name is Greek for helmsman or pilot, referring to the fact that it supports teams in navigating their way through the management of containerization.
Kubernetes brand icon.
Google was an early adopter of containers. As well as the advantages of bundling in environment configuration with the application code during development, the IT giant recognized that breaking everything down into small, discrete, manageable chunks of code could have major benefits in terms of agility and productivity – the less developers had to focus on, the more they could achieve. The only problem was, running software services at the scale of Google search, Gmail, YouTube and the rest, Google soon realized it was having to juggle billions of containers with a continuous cycle of updates and deployment. Having broken everything down into bite-sized pieces, it needed a way to reassemble them all again, and move them where they needed to go, efficiently and effectively, at a global scale. Kubernetes was its answer.
Kubernetes can be understood as a workload automation resource for teams working with containers. It resolves some of the key challenges DevOps teams face when working with containers at any sort of scale and complexity – managing, scheduling and networking clusters of containers within a microservices architecture, identifying and fixing issues within individual containers in a cluster to improve redundancy, speeding up configuration and deployment when you are trying to update dozens of different component parts at once.
Kubernetes is sometimes described as being “designed for deployment”, providing a natural balance to the benefits containerization brings to development. There are certainly elements to Kubernetes which live up to this billing – for example, the fact that an application can be installed and run using a single command. From an operational perspective, the big advantage of Kubernetes is that it provides a ready-made tool set for managing the entire application lifecycle from deployment onwards, saving teams the trouble of building their own infrastructure configuration solutions and making automation standard.
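To make that concrete, here is a minimal sketch of the kind of declarative configuration Kubernetes works from – the application name, image tag and replica count are placeholders for illustration:

```yaml
# deployment.yaml: declare the desired state; Kubernetes does the rest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # keep three identical copies running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0   # a hypothetical container image
```

A single `kubectl apply -f deployment.yaml` is then enough: Kubernetes schedules the containers onto the cluster, restarts any that fail, and scaling up later is one more command (`kubectl scale deployment myapp --replicas=10`).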
However, like containers themselves, the benefits of Kubernetes are just as relevant to development as they are to deployment and operations, and it is therefore perhaps better to think of it as a solution ‘designed for DevOps’. The fact is, in a true DevOps team, it falls to developers to script solutions for operational issues before an application reaches the deployment stage, or at least come up with solutions to identified problems in subsequent iterations. From load balancing to examining logs to running system health checks to privacy and authentication, Kubernetes provides ready-made solutions, often run with a single command, to operational requirements that can pose major stumbling blocks for developers trying to get their applications production-ready, especially in multiple environments. Rather than worrying about how to make all of these functions work in different private cloud, public cloud, bare metal or hybrid environments, Kubernetes gives developers the space to focus on core application functionality – a little like they did in the pre-DevOps days.
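As a sketch of what those ready-made solutions look like, the fragments below show a health check and a load-balancing Service expressed as Kubernetes configuration – the paths, ports and names are illustrative assumptions:

```yaml
# Inside a container spec: Kubernetes restarts the container
# automatically if this health check starts failing
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
---
# A Service gives the app one stable address and load-balances
# traffic across every matching container
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

The same few lines work unchanged on a private cloud, a public cloud or bare metal – which is precisely the operational burden lifted from developers.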
In summary, containers and Kubernetes are often described as offering DevOps teams a ‘best of both worlds’ solution for efficient software production at scale: the flexibility of decoupling how an application performs from specific infrastructure, enabling development for multiple environments, combined with a set of tools that make deployment and operational management as straightforward as they would be if you only had to consider one environment.
On their own, containers deliver the twin benefits of re-engineering the relationship between application and infrastructure and breaking application functions down into smaller modular units. A single configuration, written once, runs anywhere, which also leads to faster, more agile development, with more scope to focus on perfecting what the application does rather than on solving how to make it run.
But when it comes to more complex, multi-container applications and large scale distribution, you also need a solution for managing how containers are integrated and deployed. By simplifying configuration and deployment in any environment using straightforward code, Kubernetes provides the portability and consistency that smooths the path for applications to pass through development, testing, sys-admin, production and operations without a glitch. Completely agnostic, Kubernetes will run containers coded in any language, intended for any platform, providing DevOps teams with supreme flexibility. And thanks to its use of automation, it helps to speed up DevOps cycles, supporting continuous development and, by removing the complexity of handling large numbers of containers at once, a microservices approach.
Posted by: Camila Panizzi Luz