Data Locality for Stateful Workloads in Kubernetes

Data Locality is the newest StorageOS feature for improving the performance of workloads in Kubernetes. Here we consider what Data Locality is, how it works, and how it benefits applications.

What is Data Locality?

StorageOS volumes can be hosted by any node in the cluster and accessed by applications running on any other node in that cluster. This helps ensure high availability, makes pod scheduling decisions easier, and is suitable for general workloads.

However, for applications using remote volumes there is a cost: the network round-trip time adds IO latency, and that latency varies with other network traffic in the system.

For some high performance or mission critical applications, we’d like that latency to be as low as possible and, importantly, as deterministic as possible. Why is determinism important?

Imagine a database powering a website. Each time a user clicks on the website, the database makes many requests to disk to read or write information. Each of those requests takes a certain amount of time – the latency. In our example, the database issues 20 sequential IO requests, each taking a minimum of 2ms and a maximum of 4ms to complete. The cumulative latency for these IO requests will therefore be between 40ms and 80ms.

If we consider the same system with a minimum IO latency of 2ms but a maximum latency of 10ms, the cumulative latency now varies between 40ms and 200ms.
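To make the arithmetic concrete, here is a minimal sketch in Go using only the figures from the example above (20 sequential requests, with 2ms/4ms and 2ms/10ms per-request bounds):

```go
package main

import "fmt"

func main() {
	const requests = 20 // sequential IO requests per page click

	// Scenario 1: each request takes between 2ms and 4ms.
	fmt.Printf("tight bounds: %dms to %dms\n", requests*2, requests*4) // 40ms to 80ms

	// Scenario 2: same 2ms minimum, but a 10ms worst case.
	fmt.Printf("loose bounds: %dms to %dms\n", requests*2, requests*10) // 40ms to 200ms
}
```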

The more predictable the latency is, the more deterministic we can say it is.

Response times of less than 100ms in user interfaces are typically perceived as instantaneous, whereas anything above 100ms is noticeable. For a consistent user experience, we’d like all our page clicks to respond in under 100ms, not just the fastest ones. This predictability – a system that behaves the same from one request to the next – is why deterministic IO latency is important for many applications.

For applications requiring the lowest and most deterministic IO latency, co-locating an application with its volume is beneficial. This eliminates the network from the IO path, reducing round trip time and variability. We refer to this co-location as ‘Data Locality’.

Data Locality and StorageOS

In Version 1.5 of StorageOS, we’ve introduced the Data Locality feature. It allows Kubernetes to orchestrate resources in a ‘storage-aware’ way, automatically scheduling applications on the same node as their associated data. 

To satisfy the demands of different workloads, we’ve implemented the feature with two modes. 

  1. The first, ‘preferred’, places pods adjacent to their volumes on a best-effort basis. If pods cannot be scheduled locally, we allow remote placement.
  2. The second, ‘strict’, places pods adjacent to their volumes or prevents scheduling entirely (the pods remain in the Pending state). The idea here is that for some workloads, such as distributed databases or message buses, it may be preferable not to schedule a pod that would run more slowly than its peers, ensuring all running instances respond with similar latency.

We designed these modes to implement RFC 2119 SHOULD and MUST semantics, respectively.

How it Works

There are two components which make up our Data Locality feature.

  1. First, we implement a Kubernetes Scheduler Extender. This interacts with the standard Kubernetes scheduler to influence where pods are placed when scheduling decisions are made. Broadly speaking, it acts as a filter which constrains placement of pods to nodes that host the relevant StorageOS volumes (a sketch follows after this list).
  2. Second, our Mutating Admission Controller watches all new pod creation requests and, for pods which mount StorageOS volumes, mutates the specification so that the scheduler extender is invoked (sketched further below).
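A scheduler extender is essentially an HTTP service: the scheduler sends it a pod plus a list of candidate nodes, and the extender returns the subset it approves. The sketch below is a simplified illustration of that filter step, not StorageOS’s actual implementation – the structs only loosely mirror the extender wire format (the real types live in k8s.io/kube-scheduler/extender/v1), and volumeNodesFor and the strict flag are hypothetical stand-ins for looking up which nodes host a pod’s StorageOS volumes and for the ‘strict’ vs ‘preferred’ modes described above:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Simplified request/response shapes for a scheduler extender "filter" call.
type filterArgs struct {
	PodName   string   `json:"podName"`
	NodeNames []string `json:"nodeNames"`
}

type filterResult struct {
	NodeNames []string `json:"nodeNames"`
	Error     string   `json:"error,omitempty"`
}

// volumeNodesFor is a hypothetical lookup: which nodes host this pod's volumes?
func volumeNodesFor(pod string) map[string]bool {
	return map[string]bool{"node-1": true} // placeholder data
}

func filterHandler(strict bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var args filterArgs
		if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		local := volumeNodesFor(args.PodName)
		var keep []string
		for _, n := range args.NodeNames {
			if local[n] {
				keep = append(keep, n) // node hosts the pod's volume: keep it
			}
		}

		result := filterResult{NodeNames: keep}
		switch {
		case len(keep) == 0 && strict:
			// 'strict': no co-located node is available, so refuse to schedule.
			result.Error = "no node hosts the pod's StorageOS volume"
		case len(keep) == 0:
			// 'preferred': fall back to the original candidates (remote IO allowed).
			result.NodeNames = args.NodeNames
		}

		json.NewEncoder(w).Encode(result)
	}
}

func main() {
	http.HandleFunc("/filter", filterHandler(false)) // preferred mode
	http.ListenAndServe(":8888", nil)
}
```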

These two components act in concert to ensure that, where required, Data Locality is observed.
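The admission side can be sketched in the same spirit. A mutating webhook receives an AdmissionReview for each pod creation and, when the pod mounts a StorageOS volume, returns a JSONPatch that adjusts the pod spec so the extender-aware scheduler handles it. Again this is a simplified illustration rather than the actual StorageOS controller: the structs only approximate the AdmissionReview wire format (the real types live in k8s.io/api/admission/v1), usesStorageOSVolume is a hypothetical check, and the storageos-scheduler name is an assumption:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Simplified AdmissionReview shapes.
type admissionReview struct {
	Request  *admissionRequest  `json:"request,omitempty"`
	Response *admissionResponse `json:"response,omitempty"`
}

type admissionRequest struct {
	UID    string          `json:"uid"`
	Object json.RawMessage `json:"object"` // the pod being created
}

type admissionResponse struct {
	UID       string `json:"uid"`
	Allowed   bool   `json:"allowed"`
	Patch     []byte `json:"patch,omitempty"`
	PatchType string `json:"patchType,omitempty"`
}

// usesStorageOSVolume is a hypothetical check for a StorageOS-backed volume mount.
func usesStorageOSVolume(pod json.RawMessage) bool { return true }

func mutateHandler(w http.ResponseWriter, r *http.Request) {
	var review admissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	resp := admissionResponse{UID: review.Request.UID, Allowed: true}
	if usesStorageOSVolume(review.Request.Object) {
		// Point the pod at a scheduler that has the extender configured
		// ("storageos-scheduler" is an assumed name).
		patch := []map[string]interface{}{
			{"op": "add", "path": "/spec/schedulerName", "value": "storageos-scheduler"},
		}
		resp.Patch, _ = json.Marshal(patch)
		resp.PatchType = "JSONPatch"
	}

	json.NewEncoder(w).Encode(admissionReview{Response: &resp})
}

func main() {
	// Admission webhooks must be served over TLS in a real cluster;
	// the certificate paths here are placeholders.
	http.HandleFunc("/mutate", mutateHandler)
	http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil)
}
```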

Benefits of StorageOS Data Locality

If you are working on high performance or mission critical applications and need latency to be as low as possible, StorageOS Data Locality can help. By co-locating applications with the volumes they consume, you’ll avoid network round trips for your performance-sensitive workloads.

Try out Data Locality using our self-evaluation guide. As always, we’d love to hear your feedback – please do come and join us on our public Slack channel.


Author: Paul Sobey

Paul Sobey is Head of Product at StorageOS. Paul has worked as a systems and infrastructure engineer for over 15 years, responsible for deploying cloud and on-premises infrastructure as well as deploying Kubernetes and containers in production.
