
Cooking New Patterns with Storage for Containers

Separation of concerns is a fundamental pattern for anyone in the business of producing quality software, and especially distributed systems (microservices, anyone?). At the same time, we are all in the hurried race to zero:

  • zero runtime latency
  • zero development time
  • zero deployment time
  • zero downtime
  • zero jailtime (or fines paid for lack of compliance)
  • zero cost

The closer our tools of choice get us to those zero targets, the happier our lives will be – our time will be spent in more meaningful and efficient ways while on earth. In my case, I can dance more. You might find yourself having more time for surfing, or perfecting your vegan fettuccini alfredo recipe.

Efficiency and reliability are key for software development and distributed systems. At the same time, it is important that solutions are relatively easy to build, configure, and use. So, let’s consider what’s needed for this, starting with a restaurant analogy (to add some imagination, and because I’m a foodie).

Decoupling Your Storage (And Salad?)

When decoupling systems, it is important to find a balance between what is possible to separate, and what is sensible to separate.

Consider a restaurant:

It is possible to have a different chef for every salad ingredient. In most imaginable scenarios, this isn’t sensible.

Having specially trained chefs who apply their skills to a range of ingredients and recipes makes more sense, as does having specialty restaurants. What then is a food court, if not an opportunity to share common concerns and become slimmer in the implementation layers of each offering? (Ah, but does this optimization make for the best dining experience? What if the common concerns could be supplied in a highly adaptive, customized, and transparent way, such that the best dining experience could still be provided?)

Whatever the menu, all restaurants need to keep track of state – storage of ingredients, materials or records – to run the business. Each restaurant will decide how best to handle the coupling together of various stateful elements. It will be useful to classify the stateful elements and consider how best to treat each class.

Do you store the architectural drawings in the same place you keep the chef’s schedules?

What about always keeping the chef’s uniform in the kitchen, so that when she arrives, it is waiting for her?

What happens when you modernize storage utilized by an application?

Containers are widely used when decoupling applications. The nature of containers encourages IT staff to take another look at their applications and consider ways to define and deliver composable services for additional flexibility and reuse. I believe adding an application-centric, smart storage solution to the deployment of containerized applications makes very good sense. The solution needs to ensure that state is managed at a reasonable cost in terms of time, complexity, disruption, dependencies, and total cost of ownership.

With the addition of a separate storage layer, additional lifecycle and state management options become available. If storage that is external to a container is long-lived and reliable, one could deploy a nearly empty container that fires off a start script and utilizes a mapped path to a pre-existing volume that holds all the necessary bits required to run the desired application.
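As a rough sketch of that idea (assuming Docker and the Docker SDK for Python; the base image, host path, and start script are hypothetical placeholders), the container itself carries almost nothing and the pre-existing volume supplies the application:

```python
# Sketch: a nearly empty container whose only job is to run a start script
# that lives, together with the application bits, on a long-lived volume.
import docker

client = docker.from_env()

container = client.containers.run(
    "debian:stable-slim",            # minimal base image, no application baked in
    command=["/opt/app/start.sh"],   # start script supplied by the mapped volume
    volumes={
        "/srv/app-volume": {         # pre-existing, long-lived storage with all the bits
            "bind": "/opt/app",
            "mode": "rw",
        }
    },
    detach=True,
    name="app-from-volume",
)

print(container.name, container.status)
```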

If the storage solution also allows for transparent compression, encryption, and replication of the state it manages, does it become more or less reasonable in each scenario? Should the chef’s hat be collapsed down to a small disk and locked up using AES 256?  What if there is zero additional cost to doing so?

How do we classify the storage menu?

The following are forms of information that will require some stateful solution(s):

  • The runtime executables for an application: (the kitchen and the chef)
    • Libraries
    • Binaries
  • The output of that application: (the dishes (as they are delivered and being prepared), the ever-changing guest list, the staff, and the changing available ingredients in stock)
    • Deliberately Managed State (DB tables, message queues, registry records for lifecycle management of records or files, etc.)
    • Logs
  • The input for that application: (the restaurant size, color scheme, hours of operation, local suppliers, and seasonal menu items / regional specials)
    • Configuration files
    • Start script(s)

Imagining our restaurant as a software scenario, the desired behavior is: destroy one restaurant and another may be configured to rise in its place. (Although it may have to be on the next block if there was a water-main break). The restaurant is composed of many elements that work together and may or may not have their life-cycles tightly coupled.

Let’s look at how we want to treat the runtime executables – the functions and business logic (the Kitchen and Chef):

Let’s say our application gets started on nodeA and it fails. When it starts again on nodeB, thanks to an intelligent orchestrator like Kubernetes or manual failover, it should run the same version of the application as before to allow for compatibility with other components/applications. We want to make sure the French pastry chef doesn’t end up in the fast food kitchen. The state that determines these things is often stored on a file system as a dynamically downloadable image that requires only a name to be properly identified.

A properly configured container orchestrator makes sure the correct version of the application gets started, and in the correct quantities, namespace, and region. That takes care of the binaries and the libraries mentioned above.
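For illustration, here is a minimal sketch using the official Kubernetes Python client; the application name, image tag, replica count, namespace, and region label are all hypothetical, but they show where ‘correct version, quantities, namespace, and region’ get pinned:

```python
# Sketch: a Deployment that pins image version, replica count, namespace,
# and region, so a reschedule runs the same application in the same way.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "menu-service"}  # hypothetical application name

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="menu-service", namespace="restaurant"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the correct quantity
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                node_selector={"topology.kubernetes.io/region": "us-east-1"},  # the correct region
                containers=[
                    client.V1Container(
                        name="menu-service",
                        image="registry.example.com/menu-service:1.4.2",  # the correct version
                    )
                ],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="restaurant", body=deployment)
```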

What about the application output?

In the case of deliberately managed state: clearly, if an application such as a database or messaging solution is compelled to make a deliberate change to the values in its rows of data or message queues, we do not want that change to be lost in the event of a container or machine failure!

Again, in our restaurant, if there is a relocation event, diners, after only a brief moment of sensory deprivation, will be delighted to discover that their meals remain on their plates and their dining companions remain the same. Their familiar-looking waiters remember to deliver dessert and coffee, and return with the correct change and receipts for completed purchases. While much of this state is regularly organized through a specialized application layer such as a database, the actual responsibility for persistence lies below that layer and, if augmented, may provide more flexibility, security, and redundancy to the entire application. Proximity of the storage is also important to consider as the restaurant phases in and out of existence. Colocation would simplify the syncing of customer orders and the resulting changes to available ingredients, and would make it possible for the restaurant to operate on top of mountains as well as on ocean-going vessels, without altering the day-to-day experience of the chef and patrons.
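On the software side, a small sketch of the same idea (assuming Docker and the Docker SDK for Python; the image, volume name, and credential are placeholders): the database’s data directory sits on storage that outlives the container, so a replacement container finds the same rows waiting.

```python
# Sketch: the database container can be destroyed and recreated, but its
# data directory lives on a separate, long-lived volume.
import docker

client = docker.from_env()

client.containers.run(
    "postgres:15",
    environment={"POSTGRES_PASSWORD": "example-only"},   # placeholder credential
    volumes={
        "orders-data": {                         # named volume managed by the storage layer
            "bind": "/var/lib/postgresql/data",  # Postgres data directory
            "mode": "rw",
        }
    },
    detach=True,
    name="orders-db",
)
```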

What about the logs?

Writing logs into the container’s writable layer is a bad idea because you can’t rely on the data to live on if the container stops running or the host machine suffers a failure. In contrast, we can really benefit from making sure logs outlive the container and its host, because in the event of failure there may be clues in the logs that can resolve issues (that’s kind of the point of logs). In addition, for compliance reasons, it may be essential to your business that anything that records the activities of users of the applications be long-lived and able to outlive a failure event. This puts logs in the same category as deliberately managed state.
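On the application side, a small Python sketch of that habit: write file-based logs under a directory that is assumed to be backed by a long-lived volume mount rather than the container’s writable layer (the path is a hypothetical convention).

```python
# Sketch: log to a path expected to be a mounted, long-lived volume,
# so the log files survive the container and its host.
import logging

LOG_DIR = "/var/log/my-app"  # assumed to be a volume mount, not the writable layer

logging.basicConfig(
    filename=f"{LOG_DIR}/app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("order accepted: table=12 items=3")
```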

In the restaurant: if a guest needs a copy of their receipt or left their umbrella in the restaurant, we expect that the manager will be able to locate them both and (in some way outside the normal service channels) make them available for retrieval.

Handling configuration files & start scripts

This is interesting. In most cases, these can travel with the binaries and libraries to form the runtime behavior for an application. They could also be managed as part of the orchestration layer which doubles as a behavior-influencing instruction storage solution. A case could be made for separating some of the start scripts or configuration files and managing them as a kind of scaffolding that is stored long term as part of the infrastructure – allowing it to be managed by a different group.

If the start scripts and configuration could be tweaked and stored outside of the container in a reliable and redundant storage layer, then the same container could be reused across multiple deployment environments and be deployed with minimal awareness and intelligence. The owner of the storage layer could be the owner of the environment-specific tweaks necessary to make it work – potentially leaving the application container (and its owner(s)) completely unaware of its runtime needs. Of course, as storage solutions become more application-centric, the one providing adjustments to the configuration, no matter where it is stored, may well be the application owner.
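As a rough Python sketch of that pattern (the mount path and keys are hypothetical): the image ships only defaults, and environment-specific configuration is read from a path that the owner of the storage layer populates.

```python
# Sketch: defaults baked into the image, with environment-specific tweaks
# loaded from a configuration file on an externally managed volume.
import json
import os

DEFAULTS = {"listen_port": 8080, "feature_flags": {}}
CONFIG_PATH = "/etc/app/config.json"  # expected to be a volume mount, owned outside the image


def load_config() -> dict:
    config = dict(DEFAULTS)
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as handle:
            config.update(json.load(handle))  # environment-specific values win
    return config


if __name__ == "__main__":
    print(load_config())
```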

If the configuration is stored using encryption and has a distinct lifecycle from that of the application container, this additional level of control and security could be beneficial to some extremely risk-averse or operations-challenged organizations. The application may not be ready to fully adapt to the mechanics of an orchestrator framework due to legacy conditions that force it to read its instructions from a particular storage solution and persistence API.

Whatever choices are made regarding the storage and lifecycle management of an application’s configuration, it is worth noting that enabling consistency across every application that writes to and reads from disk may be the key to simplifying deployment and runtime processes and adding resiliency to them. I leave you to ponder, within the context of your current business and IT challenges, whether creating your own super food court makes sense to you. I also ask you to consider whether you are truly able to deploy stateless applications, or whether you are stretching your state across several islands of technology and, in so doing, adding needless complexity to your world. What about the application lifecycle you manage could change if state management, as implemented by smart storage, could follow and adapt to the changing environments and needs of the application rather than keeping you stuck in a semi-agile world?

This switch to a more agile storage solution has a surprisingly far-reaching scope of potential benefits and side effects that come at nearly zero cost in terms of disruption and complexity. This new separation of what state is stored from what is doing the storing can be enabled in a few minutes for any application running in a container. When you add to this the possibility of ‘smart storage’ that leverages a rules engine for automated policy enforcement, you lessen the chance of paying a fine when the compliance audit comes to town.

In future posts, I will endeavor to demonstrate the relative ease and remarkable efficiency of this approach. Until then, get yourself a delicious snack to enjoy while you try out our community-friendly tutorials.

Access the StorageOS Playground


Author: StorageOS
