What is a CSI Driver (Container Storage Interface)?

A CSI (Container Storage Interface) driver is a standardized plugin that enables Kubernetes to provision, attach, and manage persistent storage volumes from external storage systems — decoupling storage from Kubernetes core.

Technical Overview

The Container Storage Interface (CSI) is an industry-standard specification developed by the Kubernetes, Mesos, and Docker communities. Before CSI, storage vendors had to maintain "in-tree" Kubernetes plugins that lived inside the Kubernetes source repository, requiring vendors to submit code to the Kubernetes project for every storage-system change. CSI moved storage drivers entirely out of tree: vendors implement a gRPC server exposing defined service endpoints (Controller, Node, and Identity services), and Kubernetes communicates with drivers through a standardized sidecar container pattern.

A CSI driver deployment in Kubernetes consists of several components. The controller plugin runs as a Deployment and implements the ControllerService RPC endpoints: CreateVolume, DeleteVolume, ControllerPublishVolume (attach), ControllerUnpublishVolume (detach), CreateSnapshot, and ListVolumes. The node plugin runs as a DaemonSet on each worker node and implements NodeService endpoints: NodeStageVolume (format and mount to staging path), NodePublishVolume (bind-mount to pod), NodeUnpublishVolume, and NodeUnstageVolume.
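The split between the two plugins can be sketched as follows. This is an illustrative simplification only: the real services are gRPC interfaces with protobuf request/response types defined by the CSI spec, and the in-memory "driver" classes here exist only to show which plugin owns which RPC and how staging relates to publishing.

```python
# Illustrative sketch only: real CSI services are gRPC interfaces with
# protobuf message types; these plain-Python methods just mirror the
# RPC names and their responsibilities.

class FakeControllerService:
    """Runs once per cluster (Deployment); talks to the storage backend."""

    def __init__(self):
        self.volumes = {}         # volume_id -> size in bytes
        self.attachments = set()  # (volume_id, node_id) pairs

    def create_volume(self, name, size_bytes):
        volume_id = f"vol-{name}"
        self.volumes[volume_id] = size_bytes
        return volume_id

    def delete_volume(self, volume_id):
        self.volumes.pop(volume_id, None)

    def controller_publish_volume(self, volume_id, node_id):
        # "Attach": make the volume reachable from a specific node.
        self.attachments.add((volume_id, node_id))

    def controller_unpublish_volume(self, volume_id, node_id):
        self.attachments.discard((volume_id, node_id))


class FakeNodeService:
    """Runs on every worker node (DaemonSet); manipulates local mounts."""

    def __init__(self):
        self.staged = {}     # volume_id -> staging path (once per node)
        self.published = {}  # volume_id -> pod target path (once per pod)

    def node_stage_volume(self, volume_id, staging_path):
        # Real drivers format the device here if needed, then mount it once.
        self.staged[volume_id] = staging_path

    def node_publish_volume(self, volume_id, target_path):
        # Bind-mount from the staging path into the pod's filesystem.
        assert volume_id in self.staged, "must stage before publishing"
        self.published[volume_id] = target_path

    def node_unpublish_volume(self, volume_id):
        self.published.pop(volume_id, None)

    def node_unstage_volume(self, volume_id):
        self.staged.pop(volume_id, None)
```

The stage/publish distinction matters: NodeStageVolume runs once per volume per node, while NodePublishVolume runs once per consuming pod, which is what allows several pods on the same node to share one staged mount.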

Kubernetes communicates with CSI drivers through a set of sidecar containers maintained by the Kubernetes storage special interest group (SIG Storage): external-provisioner watches PersistentVolumeClaims and calls CreateVolume; external-attacher calls ControllerPublishVolume when a pod is scheduled; node-driver-registrar registers the node plugin with kubelet; and external-snapshotter handles VolumeSnapshot objects. This sidecar pattern means CSI driver authors write only the gRPC service implementation — the Kubernetes integration scaffolding is provided.
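As a rough illustration of what external-provisioner does, its control loop amounts to the sketch below. This is greatly simplified: the real sidecar watches the Kubernetes API server and handles retries, leader election, and topology, whereas here PVCs and PVs are plain dictionaries and the driver is any object exposing a hypothetical create_volume method.

```python
# Greatly simplified sketch of the external-provisioner control loop.
# Claims and persistent volumes are plain dicts standing in for the
# Kubernetes API objects the real sidecar watches.

class FakeDriver:
    """Stand-in for the driver side of the CreateVolume gRPC call."""
    def create_volume(self, name, size_bytes):
        return f"vol-{name}"

def provision_pending_claims(claims, persistent_volumes, driver):
    """For each unbound PVC, call the driver's CreateVolume RPC and
    create a PersistentVolume bound to that claim."""
    for claim in claims:
        if claim.get("volume_name"):
            continue  # already bound; nothing to do
        volume_id = driver.create_volume(
            name=claim["name"], size_bytes=claim["size_bytes"]
        )
        persistent_volumes.append({
            "name": volume_id,
            "claim_ref": claim["name"],
            "csi_volume_handle": volume_id,  # handed back to the driver later
        })
        claim["volume_name"] = volume_id  # bind the PVC to the new PV
```

The same watch-and-reconcile shape applies to the other sidecars: external-attacher reconciles VolumeAttachment objects into ControllerPublishVolume calls, and external-snapshotter reconciles VolumeSnapshot objects into CreateSnapshot calls.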

How It Relates to NVMe/TCP

For NVMe/TCP storage to be consumed by Kubernetes workloads, a CSI driver is the standard integration point. The driver's node plugin is responsible for invoking the NVMe/TCP initiator (nvme connect) to attach a volume to the worker node when a pod is scheduled, and invoking nvme disconnect when the pod is terminated. The controller plugin handles provisioning (creating NVMe namespaces on the target) and deprovisioning. simplyblock.io provides a production-ready CSI driver that provisions NVMe/TCP block volumes directly for Kubernetes StatefulSets and Deployments.
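A minimal sketch of how a node plugin might shell out to nvme-cli is shown below. The flags (-t transport, -a target address, -s service ID/port, -n subsystem NQN) are standard nvme-cli options, but the NQN value is a made-up example, and a production driver would add error handling, idempotency checks, and device-path discovery on top of this.

```python
# Sketch of the nvme-cli invocations a node plugin might issue during
# NodeStageVolume / NodeUnstageVolume. A production driver would also
# wait for the block device to appear and handle already-connected /
# already-disconnected cases idempotently.

def nvme_connect_cmd(traddr, trsvcid, nqn):
    """Build the argv for attaching an NVMe/TCP namespace to this node.
    -t transport, -a target address, -s service ID (port), -n subsystem NQN."""
    return ["nvme", "connect", "-t", "tcp",
            "-a", traddr, "-s", str(trsvcid), "-n", nqn]

def nvme_disconnect_cmd(nqn):
    """Build the argv for detaching the subsystem on teardown."""
    return ["nvme", "disconnect", "-n", nqn]

# A driver would execute these with subprocess.run(cmd, check=True).
```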

Key Characteristics

  • Specification: CSI spec maintained at github.com/container-storage-interface/spec
  • Transport: gRPC over Unix domain sockets
  • Components: Controller plugin (Deployment) + Node plugin (DaemonSet)
  • Volume modes: Filesystem (ext4, XFS) or Block (raw device)
  • Features: Dynamic provisioning, snapshots, volume expansion, cloning
  • Sidecar containers: external-provisioner, external-attacher, node-driver-registrar, external-snapshotter

CSI Driver Lifecycle for an NVMe/TCP Volume

When a Kubernetes user creates a PersistentVolumeClaim referencing an NVMe/TCP StorageClass, the following sequence occurs: (1) external-provisioner calls the driver's CreateVolume RPC, which provisions an NVMe namespace on the target and returns its connection details; (2) Kubernetes creates a PersistentVolume bound to the PVC; (3) when a pod using the PVC is scheduled to a node, external-attacher calls ControllerPublishVolume, which may record node attachment in the storage system; (4) kubelet calls NodeStageVolume, which invokes nvme connect and formats the device if needed; (5) kubelet calls NodePublishVolume to bind-mount the volume into the pod's filesystem namespace. Deletion reverses all steps.
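The sequence above, and its reversal on deletion, can be expressed as a small toy trace. The recorder below is not real driver code; it only makes the ordering explicit: teardown issues the inverse RPC of each setup step, in reverse order.

```python
# Toy trace of the RPC ordering for one volume, from provisioning
# through deletion. Each setup RPC has an inverse, and teardown runs
# the inverses in reverse order.

INVERSE = {
    "CreateVolume": "DeleteVolume",
    "ControllerPublishVolume": "ControllerUnpublishVolume",
    "NodeStageVolume": "NodeUnstageVolume",
    "NodePublishVolume": "NodeUnpublishVolume",
}

def volume_lifecycle(setup):
    """Full trace: the setup RPCs, then their inverses in reverse order."""
    return setup + [INVERSE[rpc] for rpc in reversed(setup)]
```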
