The NVMe/TCP target is the storage-side component that listens for incoming NVMe/TCP connections, exposes NVMe subsystems and namespaces, and services I/O commands from initiators.
In an NVMe/TCP storage architecture, the target is the server side: it listens on TCP port 4420 (the IANA-assigned default for NVMe/TCP I/O) and accepts incoming connections from initiators. On Linux, the target is implemented as the nvmet-tcp kernel module, available since Linux 5.0. Configuration is performed through the kernel's configfs filesystem at /sys/kernel/config/nvmet/, where administrators define subsystems, namespaces (backed by block devices or files), allowed hosts, and transport listeners.
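The configfs workflow above can be sketched as a shell session. The subsystem NQN, backing device, and listen address here are examples, not values prescribed by the module; adapt them to your environment (root privileges required):

```shell
#!/bin/sh
# Load the target core and the TCP transport.
modprobe nvmet
modprobe nvmet-tcp

cd /sys/kernel/config/nvmet

# Create a subsystem (example NQN) and, for simplicity, allow any host.
mkdir subsystems/nqn.2024-01.io.example:sub1
echo 1 > subsystems/nqn.2024-01.io.example:sub1/attr_allow_any_host

# Add namespace 1, backed by a block device (a file path also works).
mkdir subsystems/nqn.2024-01.io.example:sub1/namespaces/1
echo -n /dev/nvme0n1 > subsystems/nqn.2024-01.io.example:sub1/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-01.io.example:sub1/namespaces/1/enable

# Create a TCP listener on port 4420.
mkdir ports/1
echo tcp          > ports/1/addr_trtype
echo ipv4         > ports/1/addr_adrfam
echo 192.168.0.10 > ports/1/addr_traddr
echo 4420         > ports/1/addr_trsvcid

# Export the subsystem through the port.
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-01.io.example:sub1 \
      ports/1/subsystems/nqn.2024-01.io.example:sub1
```

Tools such as nvmetcli wrap these same configfs operations, but the raw layout shows exactly what state the target holds.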
When an initiator connects and issues a Connect command specifying a subsystem NQN, the target validates the NQN, optionally performs authentication, and establishes admin and I/O queue pairs. For each I/O queue, the target allocates a kernel worker thread (or uses a polling mode, depending on configuration) to process incoming NVMe command PDUs, execute them against the backing block device or file, and return NVMe completion PDUs. The target services the core NVMe I/O commands — Read, Write, and Flush — plus Dataset Management (Discard/TRIM), and optionally Compare and Write Zeroes.
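From the initiator side, the Connect exchange described above is typically driven with nvme-cli; the address and NQN below are placeholders matching no particular deployment:

```shell
# Query the target's discovery service for exposed subsystems.
nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect: the kernel sends the Connect command, then sets up
# the admin queue and one I/O queue per CPU by default.
nvme connect -t tcp -a 192.168.0.10 -s 4420 \
     -n nqn.2024-01.io.example:sub1

# The target's namespaces now appear as local block devices.
nvme list
```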
Software targets like nvmet-tcp are not the only implementation option. Storage appliances, all-flash arrays, and hyperconverged storage systems implement NVMe/TCP target functionality in their own software stacks — often with proprietary optimizations for their specific hardware. User-space target implementations (e.g., SPDK's NVMe-oF target) can achieve higher performance than kernel targets by using poll-mode drivers that avoid interrupt overhead, at the cost of dedicating CPU cores to storage servicing.
The target is the storage endpoint in every NVMe/TCP deployment. Its performance characteristics — throughput, IOPS, latency, and concurrent connection capacity — directly bound what initiators can achieve. A well-configured target can serve multiple initiators simultaneously, each accessing different namespaces or sharing the same namespace (with namespace sharing enabled). In distributed storage systems, the NVMe/TCP target layer is typically replicated across multiple nodes to provide HA, with the replication logic sitting below the nvmet layer in the storage software stack.
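When a target serves multiple initiators, access is usually restricted per host NQN rather than left open. A sketch of that control in the kernel target's configfs, assuming the example subsystem and host NQNs shown:

```shell
cd /sys/kernel/config/nvmet

# Register the initiator's host NQN (example value).
mkdir hosts/nqn.2024-01.io.example:host1

# Turn off the allow-any-host shortcut on the subsystem...
echo 0 > subsystems/nqn.2024-01.io.example:sub1/attr_allow_any_host

# ...and whitelist the host explicitly via a symlink.
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-01.io.example:host1 \
      subsystems/nqn.2024-01.io.example:sub1/allowed_hosts/nqn.2024-01.io.example:host1
```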
The Linux kernel nvmet-tcp target is the standard choice for general-purpose deployments — it requires no special configuration beyond loading the module and writing to configfs, and it coexists with all other kernel workloads. The SPDK (Storage Performance Development Kit) NVMe-oF target is designed for maximum performance: it uses DPDK poll-mode drivers to process network packets in user space without interrupts, eliminates system call overhead, and pins dedicated CPU cores to the I/O path. SPDK targets can achieve 10–20M IOPS per host at sub-20 µs latency, making them appropriate for purpose-built, high-density NVMe/TCP storage appliances.
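For comparison, an SPDK target is configured at runtime through its JSON-RPC interface rather than configfs. A rough sketch, run from an SPDK build tree; the core mask, bdev name, NQN, serial, and address are all example values:

```shell
# Start the user-space target pinned to cores 0-3.
# These cores run poll-mode loops and will sit at 100% utilization.
build/bin/nvmf_tgt -m 0xF &

# Create the TCP transport and a small RAM-backed test bdev.
scripts/rpc.py nvmf_create_transport -t TCP
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512

# Create a subsystem (-a: allow any host), attach the bdev as a
# namespace, and listen on TCP port 4420.
scripts/rpc.py nvmf_create_subsystem nqn.2024-01.io.example:spdk1 -a -s SPDK0001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-01.io.example:spdk1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-01.io.example:spdk1 \
      -t tcp -a 192.168.0.10 -s 4420
```

The design trade-off is visible in the first line: performance is bought by dedicating cores to busy-polling, which is acceptable on a purpose-built appliance but wasteful on a general-purpose host.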