Most people have heard the mantra that containers should be immutable and infrastructure should be stateless. The problem is that there really is no such thing as a stateless architecture. Instead, state is just someone else's problem: either you've outsourced it to a managed service like Amazon's AWS, or you have an internal Ops team that has to make decisions about storage. Either way, someone has to deal with your storage needs, assuming that you want to do something useful with your applications!
The next problem is that there's more than one storage requirement. Your application binaries need ephemeral, performant storage. Your application data is what we typically think about when we're running databases, message queues or any other kind of stateful application. Those apps need dedicated, persistent, performant storage, plus replication for high availability, snapshots or encryption. Configuration files need to be shared across hosts, and for backups you may be optimizing for cost, so you would look at compression, deduplication or backing up to the cloud.
How do you find storage that can solve each of your needs depending on your use case?
Containers and Storage: Gotcha!
First off, a little revision about why we use containers. Containers are like lightweight small virtual machines. They’re fast to spin up and down and they improve developer productivity and efficiency. Docker is the leading company that provides container infrastructure and tools. The containers are API driven, which means that you can integrate these in your platforms and infrastructure as needed. Containers are the modern way to do infrastructure and it’s a trend across the entire IT industry.
There are significant benefits enterprises gain from containers, and these enterprises are adopting them as part of their infrastructure. But there are also some gotchas with containers. A big one is storage.
How would storage look in a containerized infrastructure?
The first problem that you shouldn’t have is pet storage. Consider the cattle-pet analogy for servers. You shouldn’t treat your servers like pets that you need to name and lovingly look after and when they fall over you need to nurse them back to health. Instead, you should treat them like cattle. Number them, when they fall over, take them out back and shoot them. In other words, you don’t want special storage pets, you want ordinary commodity hardware or cloud instances that act the same way and you can scale easily just by adding more nodes.
This is particularly difficult for storage. You don't want special servers that your storage relies on, as they'd become single points of failure.
The second problem is that containers are meant to be small and lightweight, fast to spin up and down and move around the cluster. Your data, on the other hand, is large and difficult to move around. Your data has to follow your container wherever the orchestrator moves it around the cluster. You don't want to have to pin containers to specific hosts for your data, because you lose the mobility and portability of containers.
The third problem is that humans are fallible. Humans make mistakes, and if you're relying on an operator to run through a playbook manually, you have a much higher chance of something going wrong. For storage, you want everything to be as integrated with Docker and Kubernetes, and as API driven, as you'd expect of all other resources like networking or compute.
Why Containers are Designed NOT to be Stateful
Docker containers consist of a layered image plus a writable 'container layer'.
In this model, the base image (Ubuntu, for example) is made of read-only layers, and on top sits a thin read/write container layer where new or modified data is stored. When a container is deleted, its writable layer is removed, leaving just the underlying image layers behind.
It's good because sharing layers keeps images small, and the lack of state makes it easy to move containers around. It's bad because generally you want your app to do something useful in the real world!
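You can see this ephemerality for yourself. The following is a minimal sketch, assuming a local Docker daemon is available; the container and file names are illustrative:

```shell
# Sketch only: requires a local Docker daemon. Names are illustrative.
if command -v docker >/dev/null 2>&1; then
  # Write a file into the container's writable layer, then remove the container.
  docker run --name layer-demo ubuntu sh -c 'echo hello > /scratch.txt'
  docker rm layer-demo
  # A fresh container from the same image starts with a clean writable layer,
  # so /scratch.txt from the previous container is nowhere to be found:
  docker run --rm ubuntu ls /
else
  echo "Docker not available; skipping demo"
fi
```

The writable layer died with `layer-demo`; only the read-only image layers are shared with the second container.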
To solve this, Docker provided local volumes. In this case, we mount a directory from the host into the container, which can then read and write to it. What's good about this? Performance: you can't get much faster than writing to your local host. But now that volume is tied to that specific host, so if that host goes down, your data is inaccessible. There's no locking, so you have to be careful with consistency when you have more than one container writing to the same volume. And there are also no quality of service controls, so you're subject to the noisy neighbor problem if some containers take more than their fair share of the IOPS.
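A minimal sketch of a host-mounted volume, assuming a local Docker daemon (the paths are illustrative):

```shell
# Sketch only: bind-mounts a host directory into a container.
# Requires a Docker daemon; paths and names are illustrative.
if command -v docker >/dev/null 2>&1; then
  mkdir -p /tmp/host-data
  # Anything the container writes under /data lands in /tmp/host-data
  # on the host, and survives the container's removal:
  docker run --rm -v /tmp/host-data:/data ubuntu \
    sh -c 'echo persisted > /data/note.txt'
  cat /tmp/host-data/note.txt
else
  echo "Docker not available; skipping demo"
fi
```

Note that the data only survives on that one host: start the same container on another node and `/tmp/host-data` will be empty there.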
To solve these limitations, Docker came up with a new way to integrate external storage – volume plugins. One example of a full volume plugin is StorageOS. StorageOS is a software-defined, scale-out storage platform for running enterprise, containerized applications in production. For enterprises, StorageOS is optimized for databases. We provide block storage with a standard file system on top, rather than file or object storage, natively integrated with Docker and Kubernetes.
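From the CLI, a volume plugin slots in via the `--driver` flag, so containers consume plugin-backed volumes exactly like local ones. A hedged sketch follows; the plugin name, driver name and volume names are illustrative and vary by plugin and version:

```shell
# Illustrative sketch: exact plugin and driver names vary by version.
if command -v docker >/dev/null 2>&1; then
  # Install a volume plugin (name shown is illustrative):
  docker plugin install --grant-all-permissions storageos/plugin
  # Create a volume backed by the plugin's driver rather than the
  # default 'local' driver:
  docker volume create --driver storageos db-volume
  # Containers mount it like any other volume; the plugin, not the host,
  # is responsible for where the data actually lives.
  docker run --rm -v db-volume:/var/lib/mysql ubuntu echo ready
else
  echo "Docker not available; skipping demo"
fi
```

The payoff is that the volume is no longer pinned to one host's filesystem: the plugin can serve it to whichever node the orchestrator schedules the container on.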
StorageOS is conceptually pretty simple; it’s a virtualization layer on top of any commodity or cloud storage. From the point of view of the app container, volumes are accessible exactly the same way, across the entire cluster, without having the special pet servers mentioned before, and the storage is always highly available. It’s designed to scale horizontally by adding more nodes – new nodes simply contribute their storage into the storage pool, or, if they don’t have storage themselves, can access storage on other nodes.
Author: Cheryl Hung
Cheryl Hung is the Director of Ecosystem at the Cloud Native Computing Foundation. Cheryl codes, writes and speaks about storage, containers and infrastructure. Cheryl previously worked at StorageOS as product manager and as a Google Maps software engineer, with particular expertise in mapping and geolocation services, C++, Java and Python. She graduated from the University of Cambridge with a Masters in Computer Science and has worked in London and New York.