8 Questions from the Kubernetes Master Class on Deploying WordPress and MySQL without Data Loss

Recently my colleague Ferran Arau Castell (@FerranArau) presented a Kubernetes Master Class on deploying WordPress and MySQL without data loss. You can watch the Master Class at the bottom of this blog. In the meantime, here is a recap of some of the questions we were asked.

1. Is there a way to create volumes with unique names? For example, if you create two 10 GB volumes for different deployments, how do you know which volume is for which deployment?

Yes. It’s easy to see. When we create a persistent volume claim, we set a name.

[Screenshot: persistent volume claims]

When we create a volume claim template through a StatefulSet, the name is derived from the StatefulSet name itself, and that naming carries through to the persistent volumes that StorageOS dynamically provisions.

[Screenshot: dynamically provisioned persistent volumes]
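If the screenshots don't come through, here is a minimal sketch of both patterns. The names ("wordpress-data", "mysql", the "fast" StorageClass) are illustrative, not taken from the Master Class manifests:

```yaml
# A PVC with an explicit name, and a StatefulSet whose volumeClaimTemplates
# produce PVCs named after the StatefulSet (data-mysql-0, data-mysql-1, ...).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data          # this name tells you which deployment the volume belongs to
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast        # assumed StorageOS StorageClass name
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels: {app: mysql}
  template:
    metadata:
      labels: {app: mysql}
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data              # PVCs are created as data-<statefulset>-<ordinal>
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast
        resources:
          requests:
            storage: 10Gi
```

Running `kubectl get pvc` then shows claims whose names map straight back to their workloads.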

2. How are you doing session management?

Session management in WordPress happens in the browser's cookies; it isn't really a server-side session. WordPress puts a cookie in the browser, that cookie is ciphered, and the salts in the WordPress config are what cipher the cookies and requests. However, many applications don't have this level of statelessness.

If you need sessions because you are implementing your own user API or microservices for WordPress, or you're using an e-commerce plugin that needs sessions, you'll need a session store somewhere in the infrastructure. I would discourage using the filesystem and strongly discourage using MySQL. My recommendation is Redis or any in-memory key-value store: Redis is easy to deploy in Kubernetes and it is really lightweight. You can then point the PHP session handler at the Redis service URL, as sketched below.
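One hedged way to wire this up is to ship the PHP session settings in a ConfigMap and mount them into the WordPress container. The ConfigMap name and the "redis" Service name are assumptions, and the phpredis extension must be present in the image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: php-session-config      # hypothetical name
data:
  sessions.ini: |
    ; requires the phpredis extension in the WordPress image
    session.save_handler = redis
    session.save_path = "tcp://redis:6379"   ; Kubernetes Service DNS name for Redis
```

Mounting this file into the container's PHP conf.d directory (typically /usr/local/etc/php/conf.d in the official WordPress image) moves sessions out of the filesystem and MySQL and into Redis.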

3. Networking options – Flannel, Project Calico, Traefik – networking can get so confusing. Can you clarify what you used?

Rancher makes our lives really easy by default. It lets you choose the CNI plugin, and in my case it works as I need it to with no extra configuration.

The CNI providers have done such a great job that I mostly don't need to worry about it. Whether I provision a cluster with Flannel, or use Calico plus Flannel on Amazon EKS, it usually just works. If you need higher-end functionality such as TLS or encryption, or you want to change the way packets look, you'll need to get more involved.

StorageOS runs in the host network. Why? We avoid going through a CNI overlay network for performance. We use ports 5700 to 5711, and those ports are bound to the host. We run one StorageOS pod per node, so each StorageOS node has the IP of the node it runs on.
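To make the host-network point concrete, here is what that looks like in a pod spec. This is illustrative only; the real StorageOS manifests come from its installer, and the names and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet                  # one pod per node
metadata:
  name: storage-node             # hypothetical name
spec:
  selector:
    matchLabels: {app: storage-node}
  template:
    metadata:
      labels: {app: storage-node}
    spec:
      hostNetwork: true          # pod uses the node's IP; ports 5700-5711 bind directly on the host
      containers:
        - name: node
          image: example.com/storage-node:latest   # placeholder image
```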

4. Can you talk more about networking ingress?

I mostly use Ingress for test clusters and development, and LoadBalancers for exposed production services. You put one load balancer in Google Cloud, AWS, etc., backed by an ingress controller – say NGINX. The NGINX ingress controller receives requests from that external load balancer and dispatches the traffic to a specific service.
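As a sketch of that pattern, an Ingress resource in front of the WordPress Service might look like this. The hostname and Service name are assumptions, and it presumes an NGINX ingress controller is already installed behind the cloud load balancer:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
spec:
  ingressClassName: nginx        # route through the NGINX ingress controller
  rules:
    - host: blog.example.com     # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress  # existing WordPress Service
                port:
                  number: 80
```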

5. Let’s say you need to make a quick change to the theme or some other file without using the WordPress dashboard UI. What’s the best practice for getting access to the file system and making changes?

First rule of any systems engineer or DevOps – don’t make changes in production!

Usually what you would have is a source control repository that pushes the code into your CI/CD pipeline. Whether it's Jenkins in Kubernetes or RIO in Rancher, that CI/CD pipeline builds a container image, and those changes are released through the Deployment.

In Kubernetes, we can see that the Deployment creates a new ReplicaSet. The old pods keep running, but because the image changed, Kubernetes knows new pods are needed. When the new pods are ready, the old ones are stopped.
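A minimal sketch of the Deployment side of that rollout, with placeholder names and an image tag that the CI/CD pipeline would update:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # keep old pods serving until new ones are Ready
      maxSurge: 1
  selector:
    matchLabels: {app: wordpress}
  template:
    metadata:
      labels: {app: wordpress}
    spec:
      containers:
        - name: wordpress
          image: registry.example.com/wordpress-custom:1.2.3   # tag bumped by the pipeline
          ports:
            - containerPort: 80
```

Each image change creates a new ReplicaSet, and `kubectl rollout status deployment/wordpress` shows the old pods being replaced once the new ones pass their readiness checks.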

6. What about network limits and reservations? How do you ensure there is enough throughput in a shared network for StorageOS? For example, traffic from a node hosting a PV to a node hosting NFS.

StorageOS uses the host network, so the network we have is the network we use. If the network is slow or saturated, operations are going to be slower. There is no way around that at the moment, but we have some new features in development to help with this.

The StorageOS data-provisioning topology is decentralized. Also, every RWX volume has its own shared file server, which mitigates the bottlenecks of common shared-filesystem topologies. Instead of serving the data from one NFS share only, StorageOS leverages Kubernetes to provision many shares automatically.
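In practice the RWX behaviour is requested the same way as any other claim. A hedged sketch, assuming a StorageOS StorageClass named "storageos":

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-shared         # hypothetical name
spec:
  accessModes: ["ReadWriteMany"] # StorageOS backs each RWX volume with its own shared file server
  storageClassName: storageos    # assumed StorageClass name
  resources:
    requests:
      storage: 10Gi
```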

7. Can we use StorageOS for this setup: we have a legacy microservices application with several databases that run on a custom orchestrator tightly coupled to the host storage layer. For backup and sync, we use device-mapper snapshots to manage database state. Can we use StorageOS for that in Kubernetes?

Yes. StorageOS will use any back-end device that you have. If you have a JBOD or any other back-end data store, you can give it to StorageOS as a mount point. StorageOS bind mounts the container to the host, so we can use any device available – NVMe, SCSI, JBOD, VMDK, etc. As soon as you give that to StorageOS, we will use that back-end capacity.

For legacy applications, we are working to abstract this problem away in software, so you can avoid doing that heavy lifting yourself.

8. What are the advantages of StorageOS compared to a managed service like AWS EFS or a database as a service like AWS RDS?

Operational advantages and performance.

Let’s say you provision EFS in AWS:

  • You need to configure IAM for the EFS.
  • You need to configure security groups.
  • You need the EFS tools in the AWS AMI you are using, so you might need to bake AMIs.
  • You provision an EFS resource.
  • Now you have a filesystem ID.
  • You need to go to Kubernetes and install a CSI provisioner – the way to connect Kubernetes with storage providers – which installs a DaemonSet. (StorageOS already ships with the CSI provisioner built in.)
  • You have to create a StorageClass that references the EFS provisioner.
  • You create a persistent volume that references the filesystem ID of the provisioned EFS.
  • You create a persistent volume claim that binds to that persistent volume.

These steps need to be done for every single volume you provision.

This will take you at least a morning. If you have it automated, maybe you can get it done in a couple of hours for EVERY SINGLE persistent volume claim you create.

With StorageOS, we help you create persistent volume claims on demand. You have the PVC definition and you create it with kubectl.
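For contrast with the EFS list above, here is a hedged sketch of what "already installed" means on the StorageOS side: a StorageClass created at install time, which the PVCs shown under question 1 reference by `storageClassName`. The class name and provisioner string are assumptions about a typical install:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageos-fast           # hypothetical name
provisioner: csi.storageos.com   # assumed CSI driver name from the StorageOS install
```

After that, provisioning a volume is a single `kubectl apply` of the PVC definition; the persistent volume is created dynamically and bound for you.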

The other issue is performance. EFS performance depends on the amount of data that you actually use – not the capacity you provision. Unless you use the full capacity of that EFS, things get really slow (in my own experience).

Want to learn more? Watch Ferran’s talk on deploying WordPress and MySQL without data loss.


Author: Danielle Cook

Danielle is the Marketing Director at StorageOS.
