Architecture

What Is an NVMe Subsystem?

An NVMe Subsystem is a logical grouping of one or more NVMe controllers and their associated namespaces, forming the fundamental management unit in NVMe over Fabrics architectures.

Technical Overview

An NVMe Subsystem is the top-level entity in the NVMe logical hierarchy. It is identified by a globally unique NVMe Qualified Name (NQN) in the format nqn.yyyy-mm.reverse-domain:identifier — for example, nqn.2024-01.com.example:storage-cluster-01. The subsystem contains one or more NVMe controllers (each identified by a Controller ID) and a set of namespaces that are attached to those controllers. A single subsystem can span multiple physical or virtual controllers to enable multi-path and high-availability configurations.
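The NQN format described above is regular enough to check mechanically. The sketch below validates an NQN against that layout ("nqn." + year-month + reversed domain + optional ":identifier"); the NQN string is the example from the text and the regular expression is an illustrative approximation, not the full grammar from the NVMe specification.

```shell
# Hypothetical example NQN, matching the one used in the text.
nqn="nqn.2024-01.com.example:storage-cluster-01"

# Approximate NQN layout: "nqn." prefix, yyyy-mm date the naming
# authority acquired its domain, the reversed domain name, and an
# optional ":identifier" suffix chosen by the administrator.
if echo "$nqn" | grep -Eq '^nqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'; then
    echo "valid NQN format"
else
    echo "invalid NQN format"
fi
```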

In NVMe-oF environments, the subsystem is the discovery unit: an initiator connects to a discovery controller, receives a Discovery Log Page listing available subsystems and their transport addresses, then connects directly to the target subsystem's I/O controllers. The subsystem's NQN is the primary identifier used by initiators to target a specific storage entity — analogous to an iSCSI Qualified Name (IQN) or a Fibre Channel World Wide Port Name.
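With nvme-cli on the initiator, this two-step flow — query the discovery controller, then connect to a listed subsystem — looks roughly like the sketch below. The IP addresses are documentation placeholders, and 8009 is the conventional NVMe/TCP discovery service port; substitute your target's actual addresses and the NQN reported by the discovery step.

```shell
# Step 1: query the discovery controller; it returns the Discovery
# Log Page, listing subsystem NQNs and their transport addresses.
# (192.0.2.10 is a placeholder target address.)
nvme discover -t tcp -a 192.0.2.10 -s 8009

# Step 2: connect to one of the listed I/O subsystems by NQN.
nvme connect -t tcp \
    -n nqn.2024-01.com.example:storage-cluster-01 \
    -a 192.0.2.10 -s 4420
```

`nvme connect-all` can combine both steps, connecting to every subsystem the discovery controller advertises.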

NVMe subsystems support Asymmetric Namespace Access (ANA), which allows each namespace to be reported as optimized, non-optimized, or inaccessible from the perspective of each controller. This enables active-active multi-path configurations: an initiator with two paths to a subsystem (e.g., through two different NVMe/TCP target ports) can identify which path provides optimized access and distribute I/O accordingly, while maintaining failover capability to the non-optimized path.
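On a Linux initiator with native NVMe multipath enabled, the per-path ANA state can be inspected with nvme-cli. The output shown in the comments is illustrative only — subsystem names, controller numbers, and addresses will differ on a real system.

```shell
# Show each subsystem known to the host and its paths; with native
# NVMe multipath, each path line is annotated with its ANA state.
nvme list-subsys

# Illustrative output (placeholder NQN and addresses):
# nvme-subsys0 - NQN=nqn.2024-01.com.example:storage-cluster-01
# \
#  +- nvme0 tcp traddr=192.0.2.10,trsvcid=4420 live optimized
#  +- nvme1 tcp traddr=192.0.2.11,trsvcid=4420 live non-optimized
```

I/O is steered to the "optimized" path while the "non-optimized" path remains available for failover, matching the active-active behavior described above.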

How It Relates to NVMe/TCP

When configuring an NVMe/TCP target, the administrator creates subsystems (each with a unique NQN), attaches namespaces to those subsystems, and configures the listen addresses (IP:port) on which each subsystem is exposed. On the Linux target (nvmet), this maps to the configfs hierarchy under /sys/kernel/config/nvmet/subsystems/. The initiator connects using the subsystem NQN and the target's transport address (IP address and port, 4420 by default). Multi-path high availability is achieved by exposing a subsystem through multiple ports bound to different network interfaces; the initiator's native NVMe multipath layer (or, alternatively, dm-multipath) can then load-balance and fail over transparently.
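The nvmet configfs workflow can be sketched as follows. This is a minimal configuration fragment, assuming root privileges, a TCP-capable kernel with the nvmet and nvmet-tcp modules, and placeholder values for the NQN, backing device, and listen address; a production setup would also configure host-based access control instead of allow_any_host.

```shell
# Load the target modules and enter the nvmet configfs root.
modprobe nvmet nvmet-tcp
cd /sys/kernel/config/nvmet

# 1. Create the subsystem (placeholder NQN from the text).
SUBSYS=subsystems/nqn.2024-01.com.example:storage-cluster-01
mkdir "$SUBSYS"
echo 1 > "$SUBSYS/attr_allow_any_host"   # demo only; restrict in production

# 2. Attach a namespace backed by a local block device (placeholder path).
mkdir "$SUBSYS/namespaces/1"
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1 > "$SUBSYS/namespaces/1/enable"

# 3. Create an NVMe/TCP listen port (placeholder address).
mkdir ports/1
echo tcp       > ports/1/addr_trtype
echo ipv4      > ports/1/addr_adrfam
echo 192.0.2.10 > ports/1/addr_traddr
echo 4420      > ports/1/addr_trsvcid

# 4. Expose the subsystem on the port.
ln -s "$PWD/$SUBSYS" ports/1/subsystems/
```

For multi-path HA, repeat step 3 for a second port bound to a different interface and link the same subsystem into it as in step 4.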

Key Characteristics

  • Identifier: NVMe Qualified Name (NQN), globally unique
  • Controllers per subsystem: Multiple (enables multi-path HA)
  • Namespace attachment: Namespaces are attached to the subsystem, accessible via any controller
  • Multi-path: ANA (Asymmetric Namespace Access) groups define path optimality
  • Discovery: Advertised via NVMe-oF Discovery Log Page
  • Linux config: /sys/kernel/config/nvmet/subsystems/ via configfs

NVMe Subsystem vs iSCSI Target

The NVMe Subsystem is conceptually analogous to an iSCSI target, with several improvements. iSCSI targets are identified by IQN (iSCSI Qualified Name) and expose LUNs; NVMe subsystems are identified by NQN and expose namespaces. NVMe's ANA mechanism provides standardized multi-path optimization across all subsystem controllers, whereas iSCSI relies on SCSI's ALUA (Asymmetric Logical Unit Access), whose implementations have varied across storage vendors. The NVMe subsystem model also scales better: a single subsystem can simultaneously serve thousands of namespaces to hundreds of initiators, a workload that strains traditional iSCSI target implementations.