NAME
nvme — NVM Express core driver

SYNOPSIS
To compile this driver into your kernel, place the following line in your
kernel configuration file:

	device nvme

Or, to load the driver as a module at boot, place the following line in
loader.conf(5):

	nvme_load="YES"
DESCRIPTION
The nvme driver provides support for NVM Express (NVMe) controllers, such as:
- Hardware initialization
- Per-CPU IO queue pairs
- API for registering NVMe namespace consumers such as nvd(4) or nda(4)
- API for submitting NVM commands to namespaces
- Ioctls for controller and namespace configuration and management
CONFIGURATION
By default, nvme will create an I/O queue pair for each CPU, provided enough
MSI-X vectors and NVMe queue pairs can be allocated.  If not enough vectors
or queue pairs are available, nvme will use a smaller number of queue pairs
and assign multiple CPUs per queue pair.

To force a single I/O queue pair shared by all CPUs, set the following
tunable value in loader.conf(5):

	hw.nvme.per_cpu_io_queues=0

To assign more than one CPU per I/O queue pair, thereby reducing the number
of MSI-X vectors consumed by the device, set the following tunable value in
loader.conf(5), where X is the minimum number of CPUs per queue pair:

	hw.nvme.min_cpus_per_ioq=X

To force legacy interrupts for all nvme driver instances, set the following
tunable value in loader.conf(5):

	hw.nvme.force_intx=1

Note that this tunable also disables per-CPU I/O queue pairs.

To limit the amount of system RAM used as a Host Memory Buffer on capable
devices, set the following tunable to the maximum size in bytes:

	hw.nvme.hmb_max
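The allocation policy described above (one queue pair per CPU when enough
MSI-X vectors are available, otherwise several CPUs sharing each queue pair)
can be sketched as follows.  This is an illustrative model only, not the
driver's actual code; the function name `cpus_per_ioq` and the exact
rounding are assumptions:

```c
/*
 * Illustrative sketch: how a driver like nvme(4) might divide CPUs among
 * I/O queue pairs when MSI-X vectors or controller queue pairs are scarce.
 * Not the driver's actual code.
 */
unsigned
cpus_per_ioq(unsigned ncpu, unsigned nvectors, unsigned nqueues)
{
	/* One MSI-X vector is reserved for the admin queue. */
	unsigned ioq = nvectors > 1 ? nvectors - 1 : 1;

	if (ioq > nqueues)	/* controller may support fewer queue pairs */
		ioq = nqueues;
	if (ioq > ncpu)		/* no benefit to more queues than CPUs */
		ioq = ncpu;

	/* CPUs sharing each I/O queue pair, rounded up. */
	return (ncpu + ioq - 1) / ioq;
}
```

With 8 CPUs and 9 or more vectors, each CPU gets its own queue pair; with
16 CPUs and only 5 vectors, 4 queue pairs serve 4 CPUs each, and with a
single vector (the legacy-interrupt case) every CPU shares one queue pair.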
The nvd(4) driver is used to provide a disk driver to the system by default.
The nda(4) driver can also be used instead.  The nda(4) driver can queue
commands to the drive; combine BIO_DELETE commands into a single trip; and
use the CAM I/O scheduler to bias one type of operation over another.  To
select the nda(4) driver, set the following tunable value in loader.conf(5):

	hw.nvme.use_nvd=0

When there is an error, nvme prints only the most relevant information about
the command by default.  To enable dumping of all information about the
command, set the following tunable value in loader.conf(5):

	hw.nvme.verbose_cmd_dump=1
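As an illustration, a loader.conf(5) that combines several of these
settings might read as follows; the values shown are examples, not
defaults:

```shell
# Illustrative loader.conf(5) fragment; values are examples only.
nvme_load="YES"              # load the driver as a module at boot
hw.nvme.min_cpus_per_ioq=2   # at least two CPUs per I/O queue pair
hw.nvme.use_nvd=0            # attach namespaces via nda(4) instead of nvd(4)
hw.nvme.verbose_cmd_dump=1   # dump full command details on errors
```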
SYSCTL VARIABLES
The following controller-level sysctls are currently implemented:

- dev.nvme.0.num_cpus_per_ioq
- (R) Number of CPUs associated with each I/O queue pair.
- dev.nvme.0.int_coal_time
- (R/W) Interrupt coalescing timer period in microseconds. Set to 0 to disable.
- dev.nvme.0.int_coal_threshold
- (R/W) Interrupt coalescing threshold in number of command completions. Set to 0 to disable.
- dev.nvme.0.ioq0.num_entries
- (R) Number of entries in this queue pair's command and completion queue.
- dev.nvme.0.ioq0.num_tr
- (R) Number of nvme_tracker structures currently allocated for this queue pair.
- dev.nvme.0.ioq0.num_prp_list
- (R) Number of nvme_prp_list structures currently allocated for this queue pair.
- dev.nvme.0.ioq0.sq_head
- (R) Current location of the submission queue head pointer as observed by the driver. The head pointer is incremented by the controller as it takes commands off of the submission queue.
- dev.nvme.0.ioq0.sq_tail
- (R) Current location of the submission queue tail pointer as observed by the driver. The driver increments the tail pointer after writing a command into the submission queue to signal that a new command is ready to be processed.
- dev.nvme.0.ioq0.cq_head
- (R) Current location of the completion queue head pointer as observed by the driver. The driver increments the head pointer after finishing with a completion entry that was posted by the controller.
- dev.nvme.0.ioq0.num_cmds
- (R) Number of commands that have been submitted on this queue pair.
- dev.nvme.0.ioq0.dump_debug
- (W) Writing 1 to this sysctl will dump the full contents of the submission and completion queues to the console.
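The read/write interrupt coalescing sysctls above can be set at runtime
with sysctl(8) or persistently in sysctl.conf(5); a fragment might look
like this, where the values are illustrative and the controller must
support the feature:

```shell
# Illustrative sysctl.conf(5) fragment; values are examples only.
dev.nvme.0.int_coal_time=100     # coalesce interrupts for up to 100 us
dev.nvme.0.int_coal_threshold=8  # or until 8 completions have accumulated
```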
SEE ALSO
nda(4), nvd(4), pci(4), nvmecontrol(8), disk(9)

HISTORY
The nvme driver first appeared in FreeBSD 9.2.

AUTHORS
The nvme driver was developed by Intel and originally written by Jim Harris
<[email protected]>, with contributions from Joe Golio at EMC.

This man page was written by Jim Harris <[email protected]>.

June 6, 2020