
Cinder on Cloud VPS

This post explores the new OpenStack Cinder feature for Cloud VPS.

By Andrew Bogott, Senior Site Reliability Engineer, The Wikimedia Foundation

We’ve just added a new, long-overdue feature to Cloud VPS: attachable block storage, aka OpenStack Cinder.

Historically, storage of large files and datasets has always been the weak link in the stack provided by Wikimedia Cloud Services. If you wanted to use disk space beyond the default 20 GB allocated for a new virtual machine, there were a few options, none of them ideal. If you wanted big, slow files (against a background of grumbling WMCS staff), you could use NFS. If you wanted more local storage, you were limited to the available nova flavors; often, choosing a large flavor meant using up your RAM or CPU quota when all you really wanted was a larger drive.

Starting in 2021, we’re offering Cinder as a third, better option. Cinder is a core OpenStack project that provides attachable block storage, a standard feature in most public clouds. With Cinder you can create a standalone volume of arbitrary size, then attach that volume to a VM. When it comes time to rebuild a VM, you can detach the volume and re-attach it to a newer, upgraded VM. Better yet, Cinder volumes can be extended as they fill up.
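
For concreteness, here is roughly what that workflow looks like with the openstacksdk Python client. This is only a sketch, not taken from our documentation: the server and volume names below are placeholders, and the same steps can be done from Horizon or the openstack CLI.

    import openstack

    # Credentials come from clouds.yaml or OS_* environment variables.
    conn = openstack.connect()

    # Create a standalone volume; the size (in GB) is whatever you need.
    volume = conn.create_volume(size=40, name="mydata", wait=True)

    # Attach it to an existing VM; it shows up as a new block device in the guest.
    server = conn.get_server("my-instance")
    conn.attach_volume(server, volume, wait=True)

    # When it's time to rebuild, detach the volume and re-attach it to the new VM.
    conn.detach_volume(server, volume, wait=True)
    new_server = conn.get_server("my-upgraded-instance")
    conn.attach_volume(new_server, volume, wait=True)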

Our hope is that Cinder will immediately replace some of the simpler NFS use cases. NFS is likely to remain the state of the art for shared read-only use cases (e.g. dumps), but every time we can eliminate a read/write NFS share, the rest of Cloud VPS works a little bit better for everyone.

Explore the documentation about creating and managing Cinder volumes (and a custom formatting/mounting tool).
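
To give a sense of what the formatting and mounting step involves, here is a rough sketch of the manual equivalent. The device path and mount point below are assumptions that vary per VM; on Cloud VPS you would normally let the documented helper tool handle this for you.

    import subprocess

    device = "/dev/sdb"          # where the new Cinder volume appears inside the guest (may vary)
    mountpoint = "/srv/mydata"   # hypothetical mount point

    # One-time: put a filesystem on the empty volume.
    subprocess.run(["mkfs.ext4", device], check=True)

    # Create the mount point and mount the volume.
    subprocess.run(["mkdir", "-p", mountpoint], check=True)
    subprocess.run(["mount", device, mountpoint], check=True)

    # An /etc/fstab entry (or the helper tool) makes the mount persist across
    # reboots; after that, a database's data directory can live under the mount point.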

If you’re making a new database, put it on a Cinder volume today! The old local-storage/LVM setup is still supported, but over the next few months, we hope to replace all existing automated uses of LVM so that we can standardize all nova flavors on a single 20 GB disk size and move all additional storage needs onto Cinder workflows.

Currently, each Cloud VPS project has a Cinder quota of 10 GB. Our plan is to be generous in granting quota requests in the short term. Eventually, we may settle on a higher default quota once we know more about how (and how much) our users are relying on this feature.
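
If you want to check how much of that quota your project is using, here is a small sketch with openstacksdk (the project name is a placeholder; the CLI's openstack quota show reports the same numbers):

    import openstack

    conn = openstack.connect()

    # Per-project block storage quota: gigabytes, number of volumes, snapshots, ...
    quotas = conn.get_volume_quotas("myproject")
    print("Volume quota (GB):", quotas["gigabytes"])

    # Compare against what the project's existing volumes add up to.
    used = sum(v.size for v in conn.block_storage.volumes())
    print("Currently allocated (GB):", used)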

NFS: Shared volumes for dumps, scratch work, or shared cross-project storage

Upsides
NFS files are shared between multiple VMs and so persist after VM deletion or failure.
NFS supports the creation of gigabyte-scale files or datasets.
Downsides
NFS provides extremely slow I/O access because it is a single conduit shared among many VMs.
NFS is prone to toxic inter- and intra-project interactions. If your VM overwhelms or breaks NFS, it breaks NFS for everyone in your project, and potentially all other projects that use NFS.
The more you use NFS, the more your sysadmins hate you. NFS is comically, famously hard to maintain and keep stable. There was a whole chapter devoted to NFS in 1994’s The UNIX-HATERS Handbook (link: https://en.wikipedia.org/wiki/The_UNIX-HATERS_Handbook), and several of the issues raised back then are still with us today.

Local Storage + LVM: Extended storage partitioned from the primary instance volume

Upsides
LVM partitions are local to the VM, so they provide the same performance as any other local file access.
Puppet classes exist to support simple, consistent workflows.
Downsides
LVM storage is bound to a given VM, so it is lost when the VM is deleted or damaged.
Extending existing storage space requires resizing a VM to a new flavor. Resizing is somewhat risky and depends on the availability of an appropriately scaled flavor.
OpenStack doesn’t provide quota management for local storage. That means that in order to conserve disk space, admins need to ‘quota by proxy’ by managing the available RAM:storage and CPU:storage ratios and hoping that things work out.
Our LVM workflow is largely unique to Wikimedia Cloud VPS. Anyone familiar with another public cloud will need to abandon their existing knowledge and expectations and start over.
The LVM model, as currently designed, doesn’t provide much flexibility in terms of storage size or organization. You get one big volume that uses the rest of the available space, where ‘available space’ is determined by instance flavor.
We’re phasing out this feature in favor of Cinder volumes.

Cinder: Attachable block storage managed with Horizon

Upsides
Cinder supports arbitrarily-sized volumes.
Cinder storage uses the same backend as local VM storage, so it should provide similar I/O performance.
Cinder devices can be detached from one VM and attached to another. Storage can survive corruption, failure, or deletion of the VM it’s attached to.
Attachable block storage is the norm in most public clouds. New users will expect us to support it, and now we do.
Cinder provides much more flexible arrangements of storage: you can attach multiple volumes to a single VM, and volumes can be extended as storage needs increase (see the sketch after this list).
Cinder supports per-project storage quotas, so admins can allocate space, RAM, and CPUs to a project according to actual need rather than according to approximations dictated by available flavors.
Downsides
Some existing workflows (in particular, puppetized use cases) are not yet supported for Cinder. Transitioning existing projects from LVM to Cinder may be cumbersome, requiring new puppet classes that somehow detect whether a VM is pre- or post-Cinder.
A Cinder volume can only be attached to one VM at a time. Shared storage is going to remain the exclusive domain of NFS for the foreseeable future.
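
To make the flexibility points above concrete, here is a small sketch (again openstacksdk, with placeholder names and sizes) of attaching a second volume to the same VM and growing a volume that is filling up:

    import openstack

    conn = openstack.connect()
    server = conn.get_server("my-instance")

    # Attach an additional volume alongside the existing one.
    logs = conn.create_volume(size=20, name="mylogs", wait=True)
    conn.attach_volume(server, logs, wait=True)

    # Grow an existing volume. Depending on the deployment it may need to be
    # detached first, and the filesystem inside the guest must then be grown
    # (e.g. with resize2fs) before the new space is usable.
    data = conn.get_volume("mydata")
    conn.block_storage.extend_volume(data, 80)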

About this post

Featured image credit: Careful now, Cory Denton, CC BY 2.0
