The NVMe/TCP initiator is the client-side component that establishes TCP connections to NVMe/TCP targets and sends NVMe commands over the fabric to access remote storage.
The initiator is the software (or hardware-assisted) component running on a host that consumes remote NVMe storage. On Linux, it is implemented as the nvme-tcp kernel module, mainline since kernel 5.0. The initiator establishes a TCP connection to the target's IP address on port 4420, completes the NVMe/TCP initialization handshake, sends a Fabrics Connect command identifying the target subsystem by its NQN, and then creates I/O queue pairs for submitting and completing NVMe commands.
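A minimal connect with nvme-cli might look like the following sketch; the address and NQN are placeholders you would replace with your target's values:

```shell
# Load the initiator module (nvme-cli usually autoloads it).
modprobe nvme-tcp

# Connect to a target subsystem over TCP. The admin queue is created
# first, then I/O queues. Address and NQN below are illustrative.
nvme connect --transport=tcp \
             --traddr=192.168.10.20 \
             --trsvcid=4420 \
             --nqn=nqn.2019-08.org.example:subsys1
```

After a successful connect, the subsystem's namespaces appear as standard NVMe block devices on the host.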
The connection lifecycle begins with a Fabrics Connect command that specifies the subsystem NQN, the number of I/O queues requested, and optionally an authentication exchange (DH-HMAC-CHAP). Once the admin queue is established, the initiator creates additional I/O queues, typically one per CPU core to avoid lock contention. Each queue pair consists of a Submission Queue (SQ), where the initiator posts NVMe commands, and a Completion Queue (CQ), where the target posts completions. The initiator maps queues to TCP connections one-to-one: each NVMe queue rides on its own TCP connection.
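Because each queue maps to its own TCP connection, the queue count chosen at connect time directly sets the number of TCP connections to the target. A sketch with nvme-cli (address and NQN are placeholders):

```shell
# Request one I/O queue per CPU core at connect time.
nvme connect -t tcp -a 192.168.10.20 -s 4420 \
     -n nqn.2019-08.org.example:subsys1 \
     --nr-io-queues="$(nproc)"

# Each queue (admin plus I/O) uses a separate TCP connection, so the
# number of established connections to port 4420 should be
# nr-io-queues + 1.
ss -tn 'dport = :4420'
```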
Discovery of available subsystems and transport endpoints is handled through NVMe-oF discovery. The initiator connects to a well-known discovery controller (also on TCP port 4420, or via the Central Discovery Controller model introduced in NVMe 2.0) and retrieves the Discovery Log Page, which lists all subsystems accessible from that fabric along with their transport type, address, and port. On Linux, the nvme discover command from the nvme-cli package automates this process, and the nvme connect command establishes a live connection to a discovered subsystem.
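The discovery flow above can be exercised directly with nvme-cli; the discovery controller address below is a placeholder:

```shell
# Retrieve the Discovery Log Page from the discovery controller,
# listing reachable subsystems with their transport, address, and port.
nvme discover -t tcp -a 192.168.10.20 -s 4420

# Or discover and connect to every listed subsystem in one step.
nvme connect-all -t tcp -a 192.168.10.20 -s 4420
```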
The initiator is the active side of every NVMe/TCP session: it initiates connections, drives I/O, and presents discovered namespaces to the host operating system as standard block devices. In Kubernetes environments, the CSI driver node plugin running on each worker node acts as the NVMe/TCP initiator, connecting to the storage backend's targets when a PersistentVolumeClaim is attached to a pod. The quality and efficiency of the initiator implementation — particularly how well it leverages multi-queue, io_uring, and CPU affinity — determines much of the observed storage performance.
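Once connected, the initiator's namespaces are ordinary block devices, and the blk-mq sysfs tree shows how queues were mapped onto CPUs. The device name below is illustrative:

```shell
# Show controllers and namespaces surfaced by the initiator.
nvme list
nvme list-subsys

# Inspect the queue-to-CPU mapping blk-mq established for a namespace
# (substitute your actual device for nvme0n1).
grep . /sys/block/nvme0n1/mq/*/cpu_list
```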
On Linux, the moving parts are:

- the nvme-tcp module (mainline since kernel 5.0)
- nvme-cli (nvme discover, nvme connect, nvme list)
- block devices that appear as /dev/nvmeXnY after connect

The initiator/target distinction in NVMe/TCP maps directly to the client/server model: the initiator always originates TCP connections and issues NVMe commands, while the target listens for connections and services them. Unlike some protocols (e.g., iSCSI with iSER), an NVMe/TCP initiator never acts as a data target; data flows are always initiated by the host side. This asymmetry simplifies firewall configuration: only the target needs a listening TCP port (4420), while initiators connect outbound and require no inbound firewall rules.
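One consequence of that asymmetry is that opening the fabric through a firewall touches only the target side. A sketch assuming firewalld is in use on the target host:

```shell
# Run on the target host only: allow inbound NVMe/TCP.
firewall-cmd --permanent --add-port=4420/tcp
firewall-cmd --reload

# No rule is needed on initiators: their connections are outbound.
```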