We are pleased and proud to announce the release of StorageOS v2.3, containing our ReadWriteMany solution for accessing volumes from multiple application pods at the same time. The standard access mode for PVCs within Kubernetes is ReadWriteOnce (abbreviated to RWO), which specifies a 1:1 mapping between a volume and its consuming application. This access mode is suitable for most cloud-native workflows such as databases and message buses. ReadWriteMany (abbreviated to RWX) permits multiple applications to safely access the same volume, enabling different types of workloads to run in Kubernetes.
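For readers less familiar with access modes, a minimal PersistentVolumeClaim requesting shared access might look like the following sketch (the storage class name, claim name and size here are illustrative assumptions, not prescribed values):

```yaml
# Illustrative PVC requesting shared (RWX) access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # assumed name
spec:
  accessModes:
    - ReadWriteMany          # multiple pods may mount this volume read-write
  storageClassName: storageos  # assumed class name; substitute your own
  resources:
    requests:
      storage: 5Gi
```

Changing a single line – `ReadWriteOnce` to `ReadWriteMany` – is all that distinguishes the two claims from the application's point of view.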
There are many use-cases for ReadWriteMany volumes. Some applications can benefit from horizontal scaling. Imagine a web service where static content on a backend volume is merged with dynamically generated content at run-time. Scaling such a service by adding extra worker daemons might be highly desirable, and of course with Deployments in Kubernetes, scaling is very easy. ReadWriteMany volumes can also be used to add high availability to some types of application. A good example here is the OpenShift Registry from Red Hat, which mandates that its backing storage is provided on a ReadWriteMany volume.
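The horizontal-scaling pattern described above can be sketched as a Deployment whose replicas all mount the same RWX claim (the names, image and claim below are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # assumed name
spec:
  replicas: 3                  # scale freely; every replica shares the volume
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21    # illustrative image
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: shared-data  # an RWX PVC; name is an assumption
```

With a ReadWriteOnce claim, scaling past one replica would leave the additional pods stuck pending the volume; with ReadWriteMany, `kubectl scale` just works.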
Some cloud-native applications can use ReadWriteMany volumes to expose novel functionality. KubeVirt, for example, can enable a more effective form of live migration using ReadWriteMany volumes. For service providers considering hosting virtual machines on Kubernetes, this can mean faster failover times, and ultimately a more satisfying experience for their customers.
Another common use-case for RWX volumes is to enable so-called ‘legacy workloads’. These are applications, or groups of applications, which were written with access to a shared filesystem in mind. Typically, the filesystem is used as a form of message bus to exchange data between multiple processes. Another pattern is where escort processes – for example, processes that upload files, or allow manual inspection of files at points in a processing chain – sit alongside the main application(s). During my time in the quantitative finance industry, I saw these sorts of patterns in frequent use in transaction processing systems. Similar stories exist in every industry.
Why deploy such a workload within Kubernetes? The reasons are the same as for any application we run in Kubernetes. Containers are extremely convenient ways to package applications with their dependencies. Orchestrators are extremely good at deploying and running those applications. Packaging, deployment and orchestration are difficult problems to solve well at scale, and Kubernetes gives us that functionality out of the box. Declarative syntax and self-service workflows are convenient for engineers – allowing us to focus on the desired end state, leaving the implementation detail to the orchestrator.
The ‘cloud-native’ answer to running these applications in Kubernetes is of course to re-architect them: store data in a database, use a discrete message bus to move that data between applications, and so on. Why not do this for all applications? There are several reasons, but they usually boil down to time and money. Many legacy applications are large, and have accrued months or years of development time. Re-writes are expensive – and it can be difficult to justify the cost of effectively replicating existing functionality. There may be time pressure to migrate applications – perhaps a new Kubernetes cluster is the prescribed ‘company best practice’ for running applications, and old hardware is due to be decommissioned. Perhaps an ambitious and forward-thinking CTO has mandated a target to move all applications to Kubernetes within a defined timeframe. In all these cases, it is impractical to re-write applications. ReadWriteMany is the enabling technology that allows us to move all stateful applications to Kubernetes, irrespective of their cloud-native architectural credentials.
How does StorageOS enable ReadWriteMany volumes? As with all our features, we leverage a combination of our high-speed control plane and the features of Kubernetes. Each ReadWriteMany volume starts its journey as a standard StorageOS ReadWriteOnce volume, managed end-to-end by our storage engine as usual. To provide shared access to the volume, we use an NFS service running in userspace. Each ReadWriteMany volume is mounted by an associated NFS service provisioned within the StorageOS container, on the same node as the volume master.
Once the NFS service is running, the StorageOS API Manager maintains a Kubernetes service endpoint (https://kubernetes.io/docs/concepts/services-networking/service/) which points at the NFS network listener. When mount requests for the volume come in, our CSI driver orchestrates mounts of the volume using the NFS protocol at the service endpoint.
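Conceptually, what the API Manager maintains resembles a selector-less Service paired with a manually managed Endpoints object pointing at the NFS listener. The sketch below illustrates that Kubernetes pattern only – the names, IP address and object layout are assumptions for illustration, not StorageOS internals:

```yaml
# Illustrative only: a selector-less Service plus an Endpoints object
# directing traffic to an NFS listener. All names and addresses are assumed.
apiVersion: v1
kind: Service
metadata:
  name: pvc-shared-data-nfs
spec:
  ports:
    - port: 2049             # standard NFS port
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: pvc-shared-data-nfs  # must match the Service name
subsets:
  - addresses:
      - ip: 10.1.2.3         # node currently hosting the volume master (assumed)
    ports:
      - port: 2049
        protocol: TCP
```

Because clients mount via the stable Service address rather than a node IP, the backing endpoint can be repointed without disturbing consumers.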
Our implementation provides some compelling advantages. StorageOS’ ReadWriteMany volumes provision and mount just as fast as our ReadWriteOnce volumes – significantly faster, and with less configuration overhead, than some competing cloud-based offerings.
Having the NFS service closely coupled to the state of the underlying volume allows us to react quickly in the case of node failure. When a replica is promoted using the standard StorageOS mechanisms, we can migrate the NFS service to the new master seamlessly – far faster than the Kubernetes control plane could manage alone. The last step – migration of the service endpoint – is handled by our API Manager with similar alacrity. Because client mounts are directed at the service endpoint, failovers of StorageOS RWX volumes are just as seamless as for ReadWriteOnce volumes.
Our ReadWriteMany volumes are fast at runtime too – we use NFS 4.2, which introduces performance enhancements such as server-side copy and sparse file support over earlier versions of the protocol.
Finally, because our architecture is platform-agnostic, we fully support the mounting and replication of ReadWriteMany volumes across availability zone (AZ) boundaries – an advantage in many cloud environments.
In common with all our features, we have performed extensive testing on our ReadWriteMany feature, both automated and manual, to ensure it performs as you, our customers, expect. We are proud to deliver this new functionality, and confident that StorageOS RWX volumes enable whole new suites of stateful applications to be migrated to Kubernetes.
Author: Paul Sobey
Paul Sobey is Head of Product at StorageOS. Paul has worked as a systems and infrastructure engineer for over 15 years, responsible for deploying cloud and on-premises infrastructure as well as deploying Kubernetes and containers in production.