vfio-user is a framework that allows implementing PCI devices in userspace. Clients (such as qemu) talk the vfio-user protocol over a UNIX socket to a server. This library, libvfio-user, provides an API for implementing such servers.
VFIO is a kernel facility for providing secure access to PCI devices in userspace (including pass-through to a VM). With vfio-user, instead of talking to the kernel, all interactions are done in userspace, without requiring any kernel component; the kernel VFIO implementation is not used at all for a vfio-user device. Put another way, vfio-user is to VFIO as vhost-user is to vhost.
The vfio-user protocol is intentionally modelled after the VFIO ioctl() interface, and shares many of its definitions. However, there is not an exact equivalence: for example, IOMMU groups are not represented in vfio-user.
There are many different purposes you might put this library to, such as prototyping novel devices, building testing frameworks, implementing alternatives to qemu's device emulation, adapting a device class to work over a network, and so on.
The library abstracts most of the complexity around representing the device. Applications using libvfio-user provide a description of the device (e.g. region and IRQ information) and a set of callbacks which are invoked by libvfio-user when those regions are accessed.
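For illustration, here is a minimal sketch of such a server: it describes a single BAR and handles accesses to it via a callback. This is loosely based on the samples; check libvfio-user.h for the exact signatures and constants in your version of the library.

```c
#include <err.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#include "libvfio-user.h"

/* Toy BAR0: client reads and writes land in this in-process buffer. */
static char bar0[4096];

static ssize_t
bar0_access(vfu_ctx_t *vfu_ctx, char *buf, size_t count,
            loff_t offset, bool is_write)
{
    if (offset + count > sizeof(bar0)) {
        return -1;
    }
    if (is_write) {
        memcpy(bar0 + offset, buf, count);
    } else {
        memcpy(buf, bar0 + offset, count);
    }
    return count;
}

int
main(void)
{
    vfu_ctx_t *ctx = vfu_create_ctx(VFU_TRANS_SOCK, "/tmp/vfio-user.sock",
                                    0, NULL, VFU_DEV_TYPE_PCI);
    if (ctx == NULL) {
        err(EXIT_FAILURE, "vfu_create_ctx");
    }

    /* Describe the device: a conventional PCI endpoint with one 4K BAR. */
    if (vfu_pci_init(ctx, VFU_PCI_TYPE_CONVENTIONAL,
                     PCI_HEADER_TYPE_NORMAL, 0) < 0 ||
        vfu_setup_region(ctx, VFU_PCI_DEV_BAR0_REGION_IDX, sizeof(bar0),
                         bar0_access, VFU_REGION_FLAG_RW,
                         NULL, 0, -1, 0) < 0 ||
        vfu_realize_ctx(ctx) < 0) {
        err(EXIT_FAILURE, "device setup failed");
    }

    /* Wait for a client (e.g. qemu) to connect, then service its requests. */
    if (vfu_attach_ctx(ctx) < 0) {
        err(EXIT_FAILURE, "vfu_attach_ctx");
    }
    while (vfu_run_ctx(ctx) >= 0) {
        /* keep handling vfio-user messages until the client disconnects */
    }

    vfu_destroy_ctx(ctx);
    return 0;
}
```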
The device driver can allow parts of the virtual device to be memory mapped by the virtual machine (e.g. the PCI BARs). The business logic needs to implement the mmap callback and reply to the request passing the memory address whose backing pages are then used to satisfy the original mmap() call; see the documentation for more details.
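One way this can look, in API versions where the mappable parts of a region are described up front via the fd and mmap_areas parameters of vfu_setup_region() (the memfd backing and region index here are purely illustrative):

```c
#define _GNU_SOURCE             /* for memfd_create() */
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

#include "libvfio-user.h"

#define BAR1_SIZE 0x10000

/* Sketch: expose BAR1 so the client can mmap() it directly. The fd is
 * handed to the client over the socket; accesses to the mapped pages
 * then bypass the server entirely. */
static int
setup_mmapable_bar1(vfu_ctx_t *ctx)
{
    int fd = memfd_create("bar1", 0);           /* Linux-specific */
    if (fd < 0 || ftruncate(fd, BAR1_SIZE) < 0) {
        return -1;
    }
    /* Offer the whole region for mapping (iov_base is the offset
     * within the region, not a pointer). */
    struct iovec mmap_area = { .iov_base = (void *)0, .iov_len = BAR1_SIZE };
    return vfu_setup_region(ctx, VFU_PCI_DEV_BAR1_REGION_IDX, BAR1_SIZE,
                            NULL, VFU_REGION_FLAG_RW | VFU_REGION_FLAG_MEM,
                            &mmap_area, 1, fd, 0);
}
```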
Interrupts are implemented via eventfds passed from the client and registered with the library. libvfio-user consumers can then trigger interrupts by writing to the eventfd.
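Roughly, that looks like this (API names as in libvfio-user.h; when exactly a real device would trigger is of course device-specific):

```c
/* During device setup, before vfu_realize_ctx(): declare one INTx IRQ.
 * The client later supplies the eventfd it wants signalled. */
vfu_setup_device_nr_irqs(ctx, VFU_DEV_INTX_IRQ, 1);

/* Later, whenever the device has something to report, raise the
 * interrupt; the library writes to the client's registered eventfd. */
vfu_irq_trigger(ctx, 0);
```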
Build requirements:

- meson (v0.53.0 or above)
- apt install libjson-c-dev libcmocka-dev or yum install json-c-devel libcmocka-devel
The kernel headers are necessary because VFIO structs and defines are reused.
To build:

```
meson build
ninja -C build
```

Finally, build your program and link it with libvfio-user.so.
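For example, assuming the headers and shared object are installed somewhere your toolchain can find them (the file name my-server.c is hypothetical):

```sh
cc -o my-server my-server.c -lvfio-user
```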
With the client support found in cloud-hypervisor or the in-development qemu support, most guest VM use cases will work. See below for some details on how to try this out.
However, guests with an IOMMU (vIOMMU) will not currently work: the number of DMA regions is strictly limited, and there are also issues with some server implementations such as SPDK's virtual NVMe controller.
Currently, libvfio-user has explicit support for PCI devices only. In addition, only PCI endpoints are supported (no bridges etc.).
The API is currently documented via the libvfio-user header file, along with some additional documentation.
The library (and the protocol) are actively under development, and should not yet be considered a stable API or interface.
The API is not thread safe, but individual vfu_ctx_t handles can be used separately by each thread: that is, there is no global library state.
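A multi-device server might therefore look like this (a sketch; the socket paths and the serve_one() helper are made up, and each context stays confined to its own thread):

```c
#include <pthread.h>

#include "libvfio-user.h"

/* One vfu_ctx_t per thread; since the library keeps no global state,
 * independent contexts can run concurrently. */
static void *
serve_one(void *arg)
{
    const char *sock = arg;
    vfu_ctx_t *ctx = vfu_create_ctx(VFU_TRANS_SOCK, sock, 0, NULL,
                                    VFU_DEV_TYPE_PCI);
    if (ctx == NULL) {
        return NULL;
    }
    /* Minimal setup; a real device would add regions, IRQs, etc. */
    if (vfu_pci_init(ctx, VFU_PCI_TYPE_CONVENTIONAL,
                     PCI_HEADER_TYPE_NORMAL, 0) < 0 ||
        vfu_realize_ctx(ctx) < 0) {
        vfu_destroy_ctx(ctx);
        return NULL;
    }
    if (vfu_attach_ctx(ctx) == 0) {
        while (vfu_run_ctx(ctx) >= 0) {
            ;
        }
    }
    vfu_destroy_ctx(ctx);
    return NULL;
}

int
main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, serve_one, "/tmp/dev0.sock");
    pthread_create(&t2, NULL, serve_one, "/tmp/dev1.sock");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```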
libvfio-user development is discussed in [email protected]. Subscribe here: https://lists.gnu.org/mailman/listinfo/libvfio-user-devel.
We are on Slack at libvfio-user.slack.com (invite link); or IRC at #qemu on OFTC.
Contributions are welcome; please file an issue or open a PR. Anything substantial is worth discussing with us first.
Please make sure to mark any commits with Signed-off-by (git commit -s), which signals agreement with the Developer Certificate of Origin v1.1.
Running make pre-push will do the same checks as done in GitHub CI. After merging, a Coverity scan is also done.
See Testing for details on how the library is tested.
The samples directory contains various libvfio-user examples.
lspci implements an example of how to dump the PCI header of a libvfio-user device and examine it with lspci(8):
```
# lspci -vv -F <(build/samples/lspci)
00:00.0 Non-VGA unclassified device: Device 0000:0000
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Region 0: I/O ports at <unassigned> [disabled]
        Region 1: I/O ports at <unassigned> [disabled]
        Region 2: I/O ports at <unassigned> [disabled]
        Region 3: I/O ports at <unassigned> [disabled]
        Region 4: I/O ports at <unassigned> [disabled]
        Region 5: I/O ports at <unassigned> [disabled]
        Capabilities: [40] Power Management version 0
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
```
The above sample implements a very simple PCI device that supports the Power Management PCI capability. The sample can be trivially modified to change the PCI configuration space header and add more PCI capabilities.
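For instance, reporting different vendor and device IDs (instead of the zeroes in the lspci output above) is a one-liner after vfu_pci_init(); the IDs below are made up:

```c
/* Illustrative only: set hypothetical vendor/device IDs
 * (subsystem vendor/device IDs left at zero). */
vfu_pci_set_id(vfu_ctx, 0x1234, 0x5678, 0x0, 0x0);
```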
Client/server implements a basic client/server model in which a set of fundamental tasks is performed.
The server implements a device that can be programmed to trigger interrupts (INTx) to the client. This is done by writing the desired time in seconds since Epoch to BAR0. The server then triggers an eventfd-based IRQ and then a message-based one (in order to demonstrate how it's done when passing of file descriptors isn't possible/desirable). The device also works as memory storage: BAR1 can be freely written to/read from by the host.
Since this is a completely made up device, there's no kernel driver (yet). Client implements a client that knows how to drive this particular device (that would normally be QEMU + guest VM + kernel driver).
The client exercises all commands in the vfio-user protocol, and then proceeds to perform live migration. The client spawns the destination server (this would normally be done by libvirt) and then migrates the device state, before switching entirely to the destination server. We re-use the source client instead of spawning a destination one, as this is something libvirt/QEMU would normally do.
To spice things up, the client programs the source server to trigger an interrupt and then migrates to the destination server; the programmed interrupt is delivered by the destination server. Also, while the device is being live migrated, the client spawns a thread that constantly writes to BAR1 in a tight loop. This thread emulates the guest VM accessing the device while the main thread (what would normally be QEMU) is driving the migration.
Start the source server as follows (pick whatever you like for /tmp/vfio-user.sock):

```
rm -f /tmp/vfio-user.sock* ; build/samples/server -v /tmp/vfio-user.sock
```
And then the client:

```
build/samples/client /tmp/vfio-user.sock
```
After a couple of seconds the client will start live migration. The source server will exit and the destination server will start; watch the client terminal for destination server messages.
shadow_ioeventfd_server.c and shadow_ioeventfd_speed_test.c are used to demonstrate the benefits of shadow ioeventfds; see ioregionfd for more information.
Step-by-step instructions for using libvfio-user with qemu can be found here.
SPDK uses libvfio-user to implement a virtual NVMe controller: see docs/spdk.md for more details.
You can configure vfio-user devices in a libvirt domain configuration:
- Add xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' to the domain element.

- Enable sharing of the guest's RAM:

```xml
<memoryBacking>
  <source type='file'/>
  <access mode='shared'/>
</memoryBacking>
```

- Pass the vfio-user device:

```xml
<qemu:commandline>
  <qemu:arg value='-device'/>
  <qemu:arg value='vfio-user-pci,socket=/var/run/vfio-user.sock,x-enable-migration=on'/>
</qemu:commandline>
```
The master branch of libvfio-user implements live migration with a protocol based on vfio's v2 protocol. Currently, there is no support for this in any qemu client. For current use cases that support live migration, such as SPDK, you should refer to the migration-v1 branch.
This project was formerly known as "muser", short for "Mediated Userspace Device". It implemented a proof-of-concept VFIO mediated device in userspace. Normally, VFIO mdev devices require a kernel module; muser implemented a small kernel module that forwarded onto userspace. The old kernel-module-based implementation can be found in the kmod branch.